Test Report: Docker_Linux_docker_arm64 19531

cca1ca437c91fbc205ce13fbbdef95295053f0ce:2024-08-29:35997

Failed tests (1/343)

Order  Failed test                   Duration
33     TestAddons/parallel/Registry  74.53s
TestAddons/parallel/Registry (74.53s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 2.787473ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-6fb4cdfc84-dlgj9" [02494a1f-30ad-4cf7-a0d8-5942ff632fdb] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.004482574s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-p65z4" [c30311f9-7059-4c57-b652-d784a14e1d37] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004266841s
addons_test.go:342: (dbg) Run:  kubectl --context addons-399511 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-399511 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Non-zero exit: kubectl --context addons-399511 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (1m0.124049131s)

-- stdout --
	pod "registry-test" deleted

-- /stdout --
** stderr ** 
	error: timed out waiting for the condition

** /stderr **
addons_test.go:349: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context addons-399511 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:353: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted
*
addons_test.go:361: (dbg) Run:  out/minikube-linux-arm64 -p addons-399511 ip
addons_test.go:390: (dbg) Run:  out/minikube-linux-arm64 -p addons-399511 addons disable registry --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Registry]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-399511
helpers_test.go:235: (dbg) docker inspect addons-399511:

-- stdout --
	[
	    {
	        "Id": "1a2493992fb94ea49d06e7a6043d9c0b3cc933620c113df9c9f4c974f8657ae8",
	        "Created": "2024-08-29T18:06:25.243036336Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 8860,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-08-29T18:06:25.426933273Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2cc8dc59c2b679153d99f84cc70dab3e87225f8a0d04f61969b54714a9c4cd4d",
	        "ResolvConfPath": "/var/lib/docker/containers/1a2493992fb94ea49d06e7a6043d9c0b3cc933620c113df9c9f4c974f8657ae8/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/1a2493992fb94ea49d06e7a6043d9c0b3cc933620c113df9c9f4c974f8657ae8/hostname",
	        "HostsPath": "/var/lib/docker/containers/1a2493992fb94ea49d06e7a6043d9c0b3cc933620c113df9c9f4c974f8657ae8/hosts",
	        "LogPath": "/var/lib/docker/containers/1a2493992fb94ea49d06e7a6043d9c0b3cc933620c113df9c9f4c974f8657ae8/1a2493992fb94ea49d06e7a6043d9c0b3cc933620c113df9c9f4c974f8657ae8-json.log",
	        "Name": "/addons-399511",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-399511:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-399511",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/ed1d2ac5867bebd9fd2bc183c4c3ff08d2614d91094586cb3ac8e749341c9a3b-init/diff:/var/lib/docker/overlay2/885d06ac3812e778e7d607473d2b4ffd327aef33116438d7c4f388856940402c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ed1d2ac5867bebd9fd2bc183c4c3ff08d2614d91094586cb3ac8e749341c9a3b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ed1d2ac5867bebd9fd2bc183c4c3ff08d2614d91094586cb3ac8e749341c9a3b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ed1d2ac5867bebd9fd2bc183c4c3ff08d2614d91094586cb3ac8e749341c9a3b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-399511",
	                "Source": "/var/lib/docker/volumes/addons-399511/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-399511",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-399511",
	                "name.minikube.sigs.k8s.io": "addons-399511",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "4fe5f7fd445c4dfeefd31b2f788ca73a1f495b1133e83acde9f683e4303f4b77",
	            "SandboxKey": "/var/run/docker/netns/4fe5f7fd445c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-399511": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "2a3d061e7d2ad0f78b34afa482d56e7e56bd6e8411881fd68c1adde32db5f4e4",
	                    "EndpointID": "bdd7b6788663d492f7ddc0d0c7988b457ba4c8e3f116abdf2931bb3524712546",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-399511",
	                        "1a2493992fb9"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-399511 -n addons-399511
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-399511 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-399511 logs -n 25: (1.553515857s)
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| delete  | --all                                | minikube               | jenkins | v1.33.1 | 29 Aug 24 18:05 UTC | 29 Aug 24 18:05 UTC |
	| delete  | -p download-only-111357              | download-only-111357   | jenkins | v1.33.1 | 29 Aug 24 18:05 UTC | 29 Aug 24 18:05 UTC |
	| start   | -o=json --download-only              | download-only-563162   | jenkins | v1.33.1 | 29 Aug 24 18:05 UTC |                     |
	|         | -p download-only-563162              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0         |                        |         |         |                     |                     |
	|         | --container-runtime=docker           |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=docker           |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.33.1 | 29 Aug 24 18:05 UTC | 29 Aug 24 18:05 UTC |
	| delete  | -p download-only-563162              | download-only-563162   | jenkins | v1.33.1 | 29 Aug 24 18:05 UTC | 29 Aug 24 18:05 UTC |
	| delete  | -p download-only-111357              | download-only-111357   | jenkins | v1.33.1 | 29 Aug 24 18:05 UTC | 29 Aug 24 18:05 UTC |
	| delete  | -p download-only-563162              | download-only-563162   | jenkins | v1.33.1 | 29 Aug 24 18:05 UTC | 29 Aug 24 18:05 UTC |
	| start   | --download-only -p                   | download-docker-299649 | jenkins | v1.33.1 | 29 Aug 24 18:05 UTC |                     |
	|         | download-docker-299649               |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=docker           |                        |         |         |                     |                     |
	| delete  | -p download-docker-299649            | download-docker-299649 | jenkins | v1.33.1 | 29 Aug 24 18:05 UTC | 29 Aug 24 18:05 UTC |
	| start   | --download-only -p                   | binary-mirror-832860   | jenkins | v1.33.1 | 29 Aug 24 18:06 UTC |                     |
	|         | binary-mirror-832860                 |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --binary-mirror                      |                        |         |         |                     |                     |
	|         | http://127.0.0.1:42491               |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=docker           |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-832860              | binary-mirror-832860   | jenkins | v1.33.1 | 29 Aug 24 18:06 UTC | 29 Aug 24 18:06 UTC |
	| addons  | enable dashboard -p                  | addons-399511          | jenkins | v1.33.1 | 29 Aug 24 18:06 UTC |                     |
	|         | addons-399511                        |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                 | addons-399511          | jenkins | v1.33.1 | 29 Aug 24 18:06 UTC |                     |
	|         | addons-399511                        |                        |         |         |                     |                     |
	| start   | -p addons-399511 --wait=true         | addons-399511          | jenkins | v1.33.1 | 29 Aug 24 18:06 UTC | 29 Aug 24 18:09 UTC |
	|         | --memory=4000 --alsologtostderr      |                        |         |         |                     |                     |
	|         | --addons=registry                    |                        |         |         |                     |                     |
	|         | --addons=metrics-server              |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots             |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                    |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner               |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=docker           |                        |         |         |                     |                     |
	|         | --addons=ingress                     |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                 |                        |         |         |                     |                     |
	| addons  | addons-399511 addons disable         | addons-399511          | jenkins | v1.33.1 | 29 Aug 24 18:10 UTC | 29 Aug 24 18:10 UTC |
	|         | volcano --alsologtostderr -v=1       |                        |         |         |                     |                     |
	| addons  | addons-399511 addons                 | addons-399511          | jenkins | v1.33.1 | 29 Aug 24 18:19 UTC | 29 Aug 24 18:19 UTC |
	|         | disable csi-hostpath-driver          |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| addons  | addons-399511 addons                 | addons-399511          | jenkins | v1.33.1 | 29 Aug 24 18:19 UTC | 29 Aug 24 18:19 UTC |
	|         | disable volumesnapshots              |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| addons  | addons-399511 addons                 | addons-399511          | jenkins | v1.33.1 | 29 Aug 24 18:19 UTC | 29 Aug 24 18:19 UTC |
	|         | disable metrics-server               |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p          | addons-399511          | jenkins | v1.33.1 | 29 Aug 24 18:19 UTC | 29 Aug 24 18:19 UTC |
	|         | addons-399511                        |                        |         |         |                     |                     |
	| ssh     | addons-399511 ssh curl -s            | addons-399511          | jenkins | v1.33.1 | 29 Aug 24 18:19 UTC | 29 Aug 24 18:19 UTC |
	|         | http://127.0.0.1/ -H 'Host:          |                        |         |         |                     |                     |
	|         | nginx.example.com'                   |                        |         |         |                     |                     |
	| ip      | addons-399511 ip                     | addons-399511          | jenkins | v1.33.1 | 29 Aug 24 18:19 UTC | 29 Aug 24 18:19 UTC |
	| ip      | addons-399511 ip                     | addons-399511          | jenkins | v1.33.1 | 29 Aug 24 18:19 UTC | 29 Aug 24 18:19 UTC |
	| addons  | addons-399511 addons disable         | addons-399511          | jenkins | v1.33.1 | 29 Aug 24 18:19 UTC | 29 Aug 24 18:19 UTC |
	|         | ingress-dns --alsologtostderr        |                        |         |         |                     |                     |
	|         | -v=1                                 |                        |         |         |                     |                     |
	| addons  | addons-399511 addons disable         | addons-399511          | jenkins | v1.33.1 | 29 Aug 24 18:19 UTC | 29 Aug 24 18:19 UTC |
	|         | registry --alsologtostderr           |                        |         |         |                     |                     |
	|         | -v=1                                 |                        |         |         |                     |                     |
	| addons  | addons-399511 addons disable         | addons-399511          | jenkins | v1.33.1 | 29 Aug 24 18:19 UTC |                     |
	|         | ingress --alsologtostderr -v=1       |                        |         |         |                     |                     |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/29 18:06:01
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.22.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0829 18:06:01.057270    8357 out.go:345] Setting OutFile to fd 1 ...
	I0829 18:06:01.057462    8357 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 18:06:01.057492    8357 out.go:358] Setting ErrFile to fd 2...
	I0829 18:06:01.057511    8357 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 18:06:01.057765    8357 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19531-2266/.minikube/bin
	I0829 18:06:01.058260    8357 out.go:352] Setting JSON to false
	I0829 18:06:01.059082    8357 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":2902,"bootTime":1724951859,"procs":146,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0829 18:06:01.059202    8357 start.go:139] virtualization:  
	I0829 18:06:01.061561    8357 out.go:177] * [addons-399511] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0829 18:06:01.064632    8357 out.go:177]   - MINIKUBE_LOCATION=19531
	I0829 18:06:01.064752    8357 notify.go:220] Checking for updates...
	I0829 18:06:01.069543    8357 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0829 18:06:01.072118    8357 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19531-2266/kubeconfig
	I0829 18:06:01.074245    8357 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19531-2266/.minikube
	I0829 18:06:01.076230    8357 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0829 18:06:01.078148    8357 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0829 18:06:01.080376    8357 driver.go:392] Setting default libvirt URI to qemu:///system
	I0829 18:06:01.105912    8357 docker.go:123] docker version: linux-27.2.0:Docker Engine - Community
	I0829 18:06:01.106030    8357 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0829 18:06:01.172597    8357 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-08-29 18:06:01.162430799 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0829 18:06:01.172711    8357 docker.go:307] overlay module found
	I0829 18:06:01.176477    8357 out.go:177] * Using the docker driver based on user configuration
	I0829 18:06:01.178808    8357 start.go:297] selected driver: docker
	I0829 18:06:01.178834    8357 start.go:901] validating driver "docker" against <nil>
	I0829 18:06:01.178851    8357 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0829 18:06:01.179805    8357 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0829 18:06:01.233356    8357 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-08-29 18:06:01.22400613 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0829 18:06:01.233536    8357 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0829 18:06:01.233771    8357 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0829 18:06:01.235832    8357 out.go:177] * Using Docker driver with root privileges
	I0829 18:06:01.237934    8357 cni.go:84] Creating CNI manager for ""
	I0829 18:06:01.237976    8357 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0829 18:06:01.237989    8357 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0829 18:06:01.238076    8357 start.go:340] cluster config:
	{Name:addons-399511 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-399511 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 18:06:01.240083    8357 out.go:177] * Starting "addons-399511" primary control-plane node in "addons-399511" cluster
	I0829 18:06:01.241883    8357 cache.go:121] Beginning downloading kic base image for docker with docker
	I0829 18:06:01.243850    8357 out.go:177] * Pulling base image v0.0.44-1724775115-19521 ...
	I0829 18:06:01.245710    8357 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0829 18:06:01.245847    8357 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19531-2266/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0829 18:06:01.245858    8357 cache.go:56] Caching tarball of preloaded images
	I0829 18:06:01.245747    8357 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce in local docker daemon
	I0829 18:06:01.245938    8357 preload.go:172] Found /home/jenkins/minikube-integration/19531-2266/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0829 18:06:01.245948    8357 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0829 18:06:01.246299    8357 profile.go:143] Saving config to /home/jenkins/minikube-integration/19531-2266/.minikube/profiles/addons-399511/config.json ...
	I0829 18:06:01.246326    8357 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-2266/.minikube/profiles/addons-399511/config.json: {Name:mk5017a80a6a77945a8b8c3ca7ab5651dc945772 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:06:01.262392    8357 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce to local cache
	I0829 18:06:01.262522    8357 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce in local cache directory
	I0829 18:06:01.262541    8357 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce in local cache directory, skipping pull
	I0829 18:06:01.262545    8357 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce exists in cache, skipping pull
	I0829 18:06:01.262553    8357 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce as a tarball
	I0829 18:06:01.262559    8357 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce from local cache
	I0829 18:06:18.771593    8357 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce from cached tarball
	I0829 18:06:18.771631    8357 cache.go:194] Successfully downloaded all kic artifacts
	I0829 18:06:18.771669    8357 start.go:360] acquireMachinesLock for addons-399511: {Name:mkb635beca5aaf5bacff8e9dae6807018de61149 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0829 18:06:18.771786    8357 start.go:364] duration metric: took 94.398µs to acquireMachinesLock for "addons-399511"
	I0829 18:06:18.771816    8357 start.go:93] Provisioning new machine with config: &{Name:addons-399511 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-399511 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0829 18:06:18.771893    8357 start.go:125] createHost starting for "" (driver="docker")
	I0829 18:06:18.774730    8357 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0829 18:06:18.774983    8357 start.go:159] libmachine.API.Create for "addons-399511" (driver="docker")
	I0829 18:06:18.775016    8357 client.go:168] LocalClient.Create starting
	I0829 18:06:18.775129    8357 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19531-2266/.minikube/certs/ca.pem
	I0829 18:06:19.148494    8357 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19531-2266/.minikube/certs/cert.pem
	I0829 18:06:19.669970    8357 cli_runner.go:164] Run: docker network inspect addons-399511 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0829 18:06:19.687407    8357 cli_runner.go:211] docker network inspect addons-399511 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0829 18:06:19.687502    8357 network_create.go:284] running [docker network inspect addons-399511] to gather additional debugging logs...
	I0829 18:06:19.687522    8357 cli_runner.go:164] Run: docker network inspect addons-399511
	W0829 18:06:19.704097    8357 cli_runner.go:211] docker network inspect addons-399511 returned with exit code 1
	I0829 18:06:19.704133    8357 network_create.go:287] error running [docker network inspect addons-399511]: docker network inspect addons-399511: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-399511 not found
	I0829 18:06:19.704146    8357 network_create.go:289] output of [docker network inspect addons-399511]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-399511 not found
	
	** /stderr **
	I0829 18:06:19.704248    8357 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0829 18:06:19.724020    8357 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400174de60}
	I0829 18:06:19.724063    8357 network_create.go:124] attempt to create docker network addons-399511 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0829 18:06:19.724124    8357 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-399511 addons-399511
	I0829 18:06:19.798525    8357 network_create.go:108] docker network addons-399511 192.168.49.0/24 created
	I0829 18:06:19.798557    8357 kic.go:121] calculated static IP "192.168.49.2" for the "addons-399511" container
	I0829 18:06:19.798628    8357 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0829 18:06:19.813562    8357 cli_runner.go:164] Run: docker volume create addons-399511 --label name.minikube.sigs.k8s.io=addons-399511 --label created_by.minikube.sigs.k8s.io=true
	I0829 18:06:19.830625    8357 oci.go:103] Successfully created a docker volume addons-399511
	I0829 18:06:19.830722    8357 cli_runner.go:164] Run: docker run --rm --name addons-399511-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-399511 --entrypoint /usr/bin/test -v addons-399511:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce -d /var/lib
	I0829 18:06:21.353310    8357 cli_runner.go:217] Completed: docker run --rm --name addons-399511-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-399511 --entrypoint /usr/bin/test -v addons-399511:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce -d /var/lib: (1.522530428s)
	I0829 18:06:21.353352    8357 oci.go:107] Successfully prepared a docker volume addons-399511
	I0829 18:06:21.353375    8357 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0829 18:06:21.353401    8357 kic.go:194] Starting extracting preloaded images to volume ...
	I0829 18:06:21.353552    8357 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19531-2266/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-399511:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce -I lz4 -xf /preloaded.tar -C /extractDir
	I0829 18:06:25.167424    8357 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19531-2266/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-399511:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce -I lz4 -xf /preloaded.tar -C /extractDir: (3.81382999s)
	I0829 18:06:25.167461    8357 kic.go:203] duration metric: took 3.814061633s to extract preloaded images to volume ...
	W0829 18:06:25.167618    8357 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0829 18:06:25.167762    8357 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0829 18:06:25.227891    8357 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-399511 --name addons-399511 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-399511 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-399511 --network addons-399511 --ip 192.168.49.2 --volume addons-399511:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce
	I0829 18:06:25.596928    8357 cli_runner.go:164] Run: docker container inspect addons-399511 --format={{.State.Running}}
	I0829 18:06:25.619506    8357 cli_runner.go:164] Run: docker container inspect addons-399511 --format={{.State.Status}}
	I0829 18:06:25.642310    8357 cli_runner.go:164] Run: docker exec addons-399511 stat /var/lib/dpkg/alternatives/iptables
	I0829 18:06:25.711871    8357 oci.go:144] the created container "addons-399511" has a running status.
	I0829 18:06:25.711904    8357 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19531-2266/.minikube/machines/addons-399511/id_rsa...
	I0829 18:06:26.171022    8357 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19531-2266/.minikube/machines/addons-399511/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0829 18:06:26.207924    8357 cli_runner.go:164] Run: docker container inspect addons-399511 --format={{.State.Status}}
	I0829 18:06:26.229109    8357 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0829 18:06:26.229130    8357 kic_runner.go:114] Args: [docker exec --privileged addons-399511 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0829 18:06:26.313693    8357 cli_runner.go:164] Run: docker container inspect addons-399511 --format={{.State.Status}}
	I0829 18:06:26.341100    8357 machine.go:93] provisionDockerMachine start ...
	I0829 18:06:26.341189    8357 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-399511
	I0829 18:06:26.368587    8357 main.go:141] libmachine: Using SSH client type: native
	I0829 18:06:26.368860    8357 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0829 18:06:26.368875    8357 main.go:141] libmachine: About to run SSH command:
	hostname
	I0829 18:06:26.528015    8357 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-399511
	
	I0829 18:06:26.528042    8357 ubuntu.go:169] provisioning hostname "addons-399511"
	I0829 18:06:26.528123    8357 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-399511
	I0829 18:06:26.551532    8357 main.go:141] libmachine: Using SSH client type: native
	I0829 18:06:26.551769    8357 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0829 18:06:26.551780    8357 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-399511 && echo "addons-399511" | sudo tee /etc/hostname
	I0829 18:06:26.704392    8357 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-399511
	
	I0829 18:06:26.704530    8357 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-399511
	I0829 18:06:26.726908    8357 main.go:141] libmachine: Using SSH client type: native
	I0829 18:06:26.727188    8357 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0829 18:06:26.727215    8357 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-399511' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-399511/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-399511' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0829 18:06:26.864838    8357 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0829 18:06:26.864887    8357 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19531-2266/.minikube CaCertPath:/home/jenkins/minikube-integration/19531-2266/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19531-2266/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19531-2266/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19531-2266/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19531-2266/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19531-2266/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19531-2266/.minikube}
	I0829 18:06:26.864906    8357 ubuntu.go:177] setting up certificates
	I0829 18:06:26.864916    8357 provision.go:84] configureAuth start
	I0829 18:06:26.864979    8357 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-399511
	I0829 18:06:26.882372    8357 provision.go:143] copyHostCerts
	I0829 18:06:26.882457    8357 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19531-2266/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19531-2266/.minikube/ca.pem (1078 bytes)
	I0829 18:06:26.882585    8357 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19531-2266/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19531-2266/.minikube/cert.pem (1123 bytes)
	I0829 18:06:26.882655    8357 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19531-2266/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19531-2266/.minikube/key.pem (1679 bytes)
	I0829 18:06:26.882706    8357 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19531-2266/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19531-2266/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19531-2266/.minikube/certs/ca-key.pem org=jenkins.addons-399511 san=[127.0.0.1 192.168.49.2 addons-399511 localhost minikube]
	I0829 18:06:27.205290    8357 provision.go:177] copyRemoteCerts
	I0829 18:06:27.205362    8357 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0829 18:06:27.205404    8357 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-399511
	I0829 18:06:27.223002    8357 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19531-2266/.minikube/machines/addons-399511/id_rsa Username:docker}
	I0829 18:06:27.325242    8357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-2266/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0829 18:06:27.350800    8357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-2266/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0829 18:06:27.375182    8357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-2266/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0829 18:06:27.398616    8357 provision.go:87] duration metric: took 533.682659ms to configureAuth
	I0829 18:06:27.398644    8357 ubuntu.go:193] setting minikube options for container-runtime
	I0829 18:06:27.398832    8357 config.go:182] Loaded profile config "addons-399511": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0829 18:06:27.398891    8357 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-399511
	I0829 18:06:27.416276    8357 main.go:141] libmachine: Using SSH client type: native
	I0829 18:06:27.416540    8357 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0829 18:06:27.416550    8357 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0829 18:06:27.548877    8357 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0829 18:06:27.548920    8357 ubuntu.go:71] root file system type: overlay
	I0829 18:06:27.549089    8357 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0829 18:06:27.549162    8357 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-399511
	I0829 18:06:27.567366    8357 main.go:141] libmachine: Using SSH client type: native
	I0829 18:06:27.567623    8357 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0829 18:06:27.567711    8357 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0829 18:06:27.713420    8357 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0829 18:06:27.713517    8357 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-399511
	I0829 18:06:27.730990    8357 main.go:141] libmachine: Using SSH client type: native
	I0829 18:06:27.731266    8357 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0829 18:06:27.731291    8357 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0829 18:06:28.563808    8357 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2024-08-12 11:49:05.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2024-08-29 18:06:27.705152145 +0000
	@@ -1,46 +1,49 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0829 18:06:28.563845    8357 machine.go:96] duration metric: took 2.222726021s to provisionDockerMachine
	I0829 18:06:28.563857    8357 client.go:171] duration metric: took 9.788830573s to LocalClient.Create
	I0829 18:06:28.563870    8357 start.go:167] duration metric: took 9.788887293s to libmachine.API.Create "addons-399511"
	I0829 18:06:28.563877    8357 start.go:293] postStartSetup for "addons-399511" (driver="docker")
	I0829 18:06:28.563888    8357 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0829 18:06:28.563971    8357 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0829 18:06:28.564032    8357 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-399511
	I0829 18:06:28.583652    8357 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19531-2266/.minikube/machines/addons-399511/id_rsa Username:docker}
	I0829 18:06:28.681830    8357 ssh_runner.go:195] Run: cat /etc/os-release
	I0829 18:06:28.685122    8357 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0829 18:06:28.685156    8357 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0829 18:06:28.685167    8357 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0829 18:06:28.685176    8357 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0829 18:06:28.685187    8357 filesync.go:126] Scanning /home/jenkins/minikube-integration/19531-2266/.minikube/addons for local assets ...
	I0829 18:06:28.685256    8357 filesync.go:126] Scanning /home/jenkins/minikube-integration/19531-2266/.minikube/files for local assets ...
	I0829 18:06:28.685280    8357 start.go:296] duration metric: took 121.397745ms for postStartSetup
	I0829 18:06:28.685603    8357 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-399511
	I0829 18:06:28.702040    8357 profile.go:143] Saving config to /home/jenkins/minikube-integration/19531-2266/.minikube/profiles/addons-399511/config.json ...
	I0829 18:06:28.702323    8357 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0829 18:06:28.702381    8357 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-399511
	I0829 18:06:28.719156    8357 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19531-2266/.minikube/machines/addons-399511/id_rsa Username:docker}
	I0829 18:06:28.817892    8357 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0829 18:06:28.823475    8357 start.go:128] duration metric: took 10.051567042s to createHost
	I0829 18:06:28.823503    8357 start.go:83] releasing machines lock for "addons-399511", held for 10.05169924s
	I0829 18:06:28.823578    8357 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-399511
	I0829 18:06:28.840979    8357 ssh_runner.go:195] Run: cat /version.json
	I0829 18:06:28.841031    8357 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-399511
	I0829 18:06:28.841034    8357 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0829 18:06:28.841101    8357 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-399511
	I0829 18:06:28.861236    8357 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19531-2266/.minikube/machines/addons-399511/id_rsa Username:docker}
	I0829 18:06:28.864570    8357 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19531-2266/.minikube/machines/addons-399511/id_rsa Username:docker}
	I0829 18:06:29.179640    8357 ssh_runner.go:195] Run: systemctl --version
	I0829 18:06:29.184038    8357 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0829 18:06:29.188438    8357 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0829 18:06:29.216348    8357 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0829 18:06:29.216478    8357 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0829 18:06:29.246988    8357 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0829 18:06:29.247072    8357 start.go:495] detecting cgroup driver to use...
	I0829 18:06:29.247143    8357 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0829 18:06:29.247325    8357 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0829 18:06:29.264134    8357 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0829 18:06:29.274432    8357 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0829 18:06:29.284726    8357 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0829 18:06:29.284853    8357 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0829 18:06:29.295264    8357 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0829 18:06:29.305747    8357 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0829 18:06:29.315636    8357 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0829 18:06:29.325611    8357 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0829 18:06:29.335430    8357 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0829 18:06:29.345772    8357 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0829 18:06:29.356067    8357 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0829 18:06:29.366571    8357 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0829 18:06:29.375217    8357 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0829 18:06:29.383668    8357 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 18:06:29.466466    8357 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0829 18:06:29.559091    8357 start.go:495] detecting cgroup driver to use...
	I0829 18:06:29.559142    8357 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0829 18:06:29.559197    8357 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0829 18:06:29.581753    8357 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0829 18:06:29.581836    8357 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0829 18:06:29.594554    8357 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0829 18:06:29.613789    8357 ssh_runner.go:195] Run: which cri-dockerd
	I0829 18:06:29.617822    8357 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0829 18:06:29.628001    8357 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0829 18:06:29.649600    8357 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0829 18:06:29.757205    8357 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0829 18:06:29.863328    8357 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0829 18:06:29.863560    8357 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0829 18:06:29.886455    8357 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 18:06:29.983875    8357 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0829 18:06:30.369505    8357 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0829 18:06:30.382674    8357 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0829 18:06:30.395862    8357 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0829 18:06:30.492909    8357 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0829 18:06:30.589527    8357 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 18:06:30.687998    8357 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0829 18:06:30.702191    8357 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0829 18:06:30.713784    8357 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 18:06:30.803261    8357 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0829 18:06:30.873559    8357 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0829 18:06:30.873672    8357 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0829 18:06:30.877457    8357 start.go:563] Will wait 60s for crictl version
	I0829 18:06:30.877538    8357 ssh_runner.go:195] Run: which crictl
	I0829 18:06:30.881573    8357 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0829 18:06:30.920067    8357 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.2
	RuntimeApiVersion:  v1
	I0829 18:06:30.920148    8357 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0829 18:06:30.946943    8357 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0829 18:06:30.972594    8357 out.go:235] * Preparing Kubernetes v1.31.0 on Docker 27.1.2 ...
	I0829 18:06:30.972759    8357 cli_runner.go:164] Run: docker network inspect addons-399511 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0829 18:06:30.987934    8357 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0829 18:06:30.991745    8357 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0829 18:06:31.002915    8357 kubeadm.go:883] updating cluster {Name:addons-399511 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-399511 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0829 18:06:31.003044    8357 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0829 18:06:31.003104    8357 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0829 18:06:31.029125    8357 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.0
	registry.k8s.io/kube-scheduler:v1.31.0
	registry.k8s.io/kube-controller-manager:v1.31.0
	registry.k8s.io/kube-proxy:v1.31.0
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	registry.k8s.io/coredns/coredns:v1.11.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0829 18:06:31.029149    8357 docker.go:615] Images already preloaded, skipping extraction
	I0829 18:06:31.029215    8357 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0829 18:06:31.047595    8357 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.0
	registry.k8s.io/kube-scheduler:v1.31.0
	registry.k8s.io/kube-controller-manager:v1.31.0
	registry.k8s.io/kube-proxy:v1.31.0
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	registry.k8s.io/coredns/coredns:v1.11.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0829 18:06:31.047618    8357 cache_images.go:84] Images are preloaded, skipping loading
	I0829 18:06:31.047637    8357 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.0 docker true true} ...
	I0829 18:06:31.047737    8357 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-399511 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:addons-399511 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0829 18:06:31.047805    8357 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0829 18:06:31.099466    8357 cni.go:84] Creating CNI manager for ""
	I0829 18:06:31.099497    8357 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0829 18:06:31.099511    8357 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0829 18:06:31.099531    8357 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-399511 NodeName:addons-399511 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0829 18:06:31.099673    8357 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-399511"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0829 18:06:31.099746    8357 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0829 18:06:31.109289    8357 binaries.go:44] Found k8s binaries, skipping transfer
	I0829 18:06:31.109361    8357 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0829 18:06:31.118283    8357 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0829 18:06:31.136825    8357 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0829 18:06:31.155794    8357 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2155 bytes)
	I0829 18:06:31.174376    8357 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0829 18:06:31.177905    8357 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0829 18:06:31.188782    8357 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 18:06:31.286445    8357 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0829 18:06:31.300740    8357 certs.go:68] Setting up /home/jenkins/minikube-integration/19531-2266/.minikube/profiles/addons-399511 for IP: 192.168.49.2
	I0829 18:06:31.300764    8357 certs.go:194] generating shared ca certs ...
	I0829 18:06:31.300782    8357 certs.go:226] acquiring lock for ca certs: {Name:mk0cd65c4cfb15731dccb0c31c1b4d3fd964b734 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:06:31.300915    8357 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19531-2266/.minikube/ca.key
	I0829 18:06:31.611433    8357 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19531-2266/.minikube/ca.crt ...
	I0829 18:06:31.611467    8357 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-2266/.minikube/ca.crt: {Name:mkd5e04ba807e2c09ea578b9f1a8c177bfd60607 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:06:31.611662    8357 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19531-2266/.minikube/ca.key ...
	I0829 18:06:31.611674    8357 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-2266/.minikube/ca.key: {Name:mk80677551b9bc564573d91894c2806f9420e304 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:06:31.611760    8357 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19531-2266/.minikube/proxy-client-ca.key
	I0829 18:06:32.383991    8357 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19531-2266/.minikube/proxy-client-ca.crt ...
	I0829 18:06:32.384025    8357 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-2266/.minikube/proxy-client-ca.crt: {Name:mk462c027d8811526b77856cfcf94fa3e3e60bae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:06:32.384204    8357 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19531-2266/.minikube/proxy-client-ca.key ...
	I0829 18:06:32.384215    8357 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-2266/.minikube/proxy-client-ca.key: {Name:mk669fc3b522ef42819f692f9822a32103305037 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:06:32.384308    8357 certs.go:256] generating profile certs ...
	I0829 18:06:32.384368    8357 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19531-2266/.minikube/profiles/addons-399511/client.key
	I0829 18:06:32.384385    8357 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19531-2266/.minikube/profiles/addons-399511/client.crt with IP's: []
	I0829 18:06:32.785457    8357 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19531-2266/.minikube/profiles/addons-399511/client.crt ...
	I0829 18:06:32.785502    8357 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-2266/.minikube/profiles/addons-399511/client.crt: {Name:mk3e074ec52188cc821465177f9f71b982ea8743 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:06:32.785682    8357 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19531-2266/.minikube/profiles/addons-399511/client.key ...
	I0829 18:06:32.785696    8357 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-2266/.minikube/profiles/addons-399511/client.key: {Name:mk81780736efe613c1e362a570cc712c6e6c5fe6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:06:32.785772    8357 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19531-2266/.minikube/profiles/addons-399511/apiserver.key.b0b8fe87
	I0829 18:06:32.785794    8357 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19531-2266/.minikube/profiles/addons-399511/apiserver.crt.b0b8fe87 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0829 18:06:33.208571    8357 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19531-2266/.minikube/profiles/addons-399511/apiserver.crt.b0b8fe87 ...
	I0829 18:06:33.208604    8357 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-2266/.minikube/profiles/addons-399511/apiserver.crt.b0b8fe87: {Name:mkc4dd96f72ee1f8a792bf7fa4faf698d092e39b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:06:33.208790    8357 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19531-2266/.minikube/profiles/addons-399511/apiserver.key.b0b8fe87 ...
	I0829 18:06:33.208805    8357 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-2266/.minikube/profiles/addons-399511/apiserver.key.b0b8fe87: {Name:mk2de57acffa382f273b82220942a93f79915b35 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:06:33.208888    8357 certs.go:381] copying /home/jenkins/minikube-integration/19531-2266/.minikube/profiles/addons-399511/apiserver.crt.b0b8fe87 -> /home/jenkins/minikube-integration/19531-2266/.minikube/profiles/addons-399511/apiserver.crt
	I0829 18:06:33.208980    8357 certs.go:385] copying /home/jenkins/minikube-integration/19531-2266/.minikube/profiles/addons-399511/apiserver.key.b0b8fe87 -> /home/jenkins/minikube-integration/19531-2266/.minikube/profiles/addons-399511/apiserver.key
	I0829 18:06:33.209037    8357 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19531-2266/.minikube/profiles/addons-399511/proxy-client.key
	I0829 18:06:33.209057    8357 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19531-2266/.minikube/profiles/addons-399511/proxy-client.crt with IP's: []
	I0829 18:06:33.371584    8357 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19531-2266/.minikube/profiles/addons-399511/proxy-client.crt ...
	I0829 18:06:33.371611    8357 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-2266/.minikube/profiles/addons-399511/proxy-client.crt: {Name:mkd192cfb53a86200ac47b1c2b33fdd85c44ab0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:06:33.371786    8357 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19531-2266/.minikube/profiles/addons-399511/proxy-client.key ...
	I0829 18:06:33.371799    8357 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-2266/.minikube/profiles/addons-399511/proxy-client.key: {Name:mk7849686eafd95b7e7283a40fa33d9ac83dbdc7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:06:33.371992    8357 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-2266/.minikube/certs/ca-key.pem (1679 bytes)
	I0829 18:06:33.372035    8357 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-2266/.minikube/certs/ca.pem (1078 bytes)
	I0829 18:06:33.372063    8357 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-2266/.minikube/certs/cert.pem (1123 bytes)
	I0829 18:06:33.372089    8357 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-2266/.minikube/certs/key.pem (1679 bytes)
	I0829 18:06:33.372781    8357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-2266/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0829 18:06:33.398241    8357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-2266/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0829 18:06:33.422506    8357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-2266/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0829 18:06:33.446969    8357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-2266/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0829 18:06:33.471747    8357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-2266/.minikube/profiles/addons-399511/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0829 18:06:33.496512    8357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-2266/.minikube/profiles/addons-399511/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0829 18:06:33.521933    8357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-2266/.minikube/profiles/addons-399511/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0829 18:06:33.546683    8357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-2266/.minikube/profiles/addons-399511/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0829 18:06:33.571502    8357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-2266/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0829 18:06:33.596149    8357 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0829 18:06:33.614158    8357 ssh_runner.go:195] Run: openssl version
	I0829 18:06:33.619487    8357 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0829 18:06:33.628895    8357 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0829 18:06:33.632224    8357 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 29 18:06 /usr/share/ca-certificates/minikubeCA.pem
	I0829 18:06:33.632288    8357 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0829 18:06:33.639142    8357 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0829 18:06:33.648412    8357 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0829 18:06:33.651585    8357 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0829 18:06:33.651631    8357 kubeadm.go:392] StartCluster: {Name:addons-399511 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-399511 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 18:06:33.651752    8357 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0829 18:06:33.668546    8357 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0829 18:06:33.681071    8357 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0829 18:06:33.696848    8357 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0829 18:06:33.696937    8357 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0829 18:06:33.707477    8357 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0829 18:06:33.707536    8357 kubeadm.go:157] found existing configuration files:
	
	I0829 18:06:33.707614    8357 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0829 18:06:33.717074    8357 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0829 18:06:33.717150    8357 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0829 18:06:33.726250    8357 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0829 18:06:33.735599    8357 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0829 18:06:33.735706    8357 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0829 18:06:33.744398    8357 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0829 18:06:33.754160    8357 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0829 18:06:33.754269    8357 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0829 18:06:33.762604    8357 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0829 18:06:33.770929    8357 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0829 18:06:33.771013    8357 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0829 18:06:33.779878    8357 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0829 18:06:33.823786    8357 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0829 18:06:33.824135    8357 kubeadm.go:310] [preflight] Running pre-flight checks
	I0829 18:06:33.845593    8357 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0829 18:06:33.845700    8357 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1068-aws
	I0829 18:06:33.845756    8357 kubeadm.go:310] OS: Linux
	I0829 18:06:33.845815    8357 kubeadm.go:310] CGROUPS_CPU: enabled
	I0829 18:06:33.845880    8357 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0829 18:06:33.845963    8357 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0829 18:06:33.846031    8357 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0829 18:06:33.846105    8357 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0829 18:06:33.846174    8357 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0829 18:06:33.846246    8357 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0829 18:06:33.846312    8357 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0829 18:06:33.846384    8357 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0829 18:06:33.911957    8357 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0829 18:06:33.912106    8357 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0829 18:06:33.912260    8357 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0829 18:06:33.927970    8357 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0829 18:06:33.931460    8357 out.go:235]   - Generating certificates and keys ...
	I0829 18:06:33.931649    8357 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0829 18:06:33.931751    8357 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0829 18:06:34.241243    8357 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0829 18:06:34.884624    8357 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0829 18:06:35.400257    8357 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0829 18:06:35.704923    8357 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0829 18:06:36.863392    8357 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0829 18:06:36.863674    8357 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-399511 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0829 18:06:37.180468    8357 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0829 18:06:37.180621    8357 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-399511 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0829 18:06:37.655742    8357 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0829 18:06:38.042150    8357 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0829 18:06:38.581225    8357 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0829 18:06:38.581579    8357 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0829 18:06:39.794258    8357 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0829 18:06:40.168650    8357 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0829 18:06:40.435872    8357 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0829 18:06:41.581603    8357 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0829 18:06:42.346627    8357 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0829 18:06:42.348192    8357 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0829 18:06:42.354226    8357 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0829 18:06:42.357171    8357 out.go:235]   - Booting up control plane ...
	I0829 18:06:42.357292    8357 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0829 18:06:42.357382    8357 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0829 18:06:42.357460    8357 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0829 18:06:42.371338    8357 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0829 18:06:42.378595    8357 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0829 18:06:42.378744    8357 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0829 18:06:42.482885    8357 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0829 18:06:42.483010    8357 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0829 18:06:43.483261    8357 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001708295s
	I0829 18:06:43.483352    8357 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0829 18:06:49.985518    8357 kubeadm.go:310] [api-check] The API server is healthy after 6.502183822s
	I0829 18:06:50.005216    8357 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0829 18:06:50.038347    8357 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0829 18:06:50.071439    8357 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0829 18:06:50.071632    8357 kubeadm.go:310] [mark-control-plane] Marking the node addons-399511 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0829 18:06:50.083041    8357 kubeadm.go:310] [bootstrap-token] Using token: aaq20s.avu7035j3ul2w3pr
	I0829 18:06:50.085644    8357 out.go:235]   - Configuring RBAC rules ...
	I0829 18:06:50.085786    8357 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0829 18:06:50.090581    8357 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0829 18:06:50.101144    8357 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0829 18:06:50.105718    8357 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0829 18:06:50.109757    8357 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0829 18:06:50.114240    8357 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0829 18:06:50.391759    8357 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0829 18:06:50.820415    8357 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0829 18:06:51.391924    8357 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0829 18:06:51.393252    8357 kubeadm.go:310] 
	I0829 18:06:51.393323    8357 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0829 18:06:51.393329    8357 kubeadm.go:310] 
	I0829 18:06:51.393414    8357 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0829 18:06:51.393419    8357 kubeadm.go:310] 
	I0829 18:06:51.393443    8357 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0829 18:06:51.393499    8357 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0829 18:06:51.393549    8357 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0829 18:06:51.393554    8357 kubeadm.go:310] 
	I0829 18:06:51.393606    8357 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0829 18:06:51.393610    8357 kubeadm.go:310] 
	I0829 18:06:51.393655    8357 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0829 18:06:51.393660    8357 kubeadm.go:310] 
	I0829 18:06:51.393710    8357 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0829 18:06:51.393789    8357 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0829 18:06:51.393856    8357 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0829 18:06:51.393860    8357 kubeadm.go:310] 
	I0829 18:06:51.393948    8357 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0829 18:06:51.394023    8357 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0829 18:06:51.394028    8357 kubeadm.go:310] 
	I0829 18:06:51.394109    8357 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token aaq20s.avu7035j3ul2w3pr \
	I0829 18:06:51.394209    8357 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a0c2755ec48e5f469ecfb39ba7372f0016b76afb44c8931bb912d2051715f819 \
	I0829 18:06:51.394229    8357 kubeadm.go:310] 	--control-plane 
	I0829 18:06:51.394233    8357 kubeadm.go:310] 
	I0829 18:06:51.394315    8357 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0829 18:06:51.394319    8357 kubeadm.go:310] 
	I0829 18:06:51.394398    8357 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token aaq20s.avu7035j3ul2w3pr \
	I0829 18:06:51.394496    8357 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a0c2755ec48e5f469ecfb39ba7372f0016b76afb44c8931bb912d2051715f819 
	I0829 18:06:51.398216    8357 kubeadm.go:310] W0829 18:06:33.820177    1806 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0829 18:06:51.398507    8357 kubeadm.go:310] W0829 18:06:33.821213    1806 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0829 18:06:51.398717    8357 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1068-aws\n", err: exit status 1
	I0829 18:06:51.398825    8357 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0829 18:06:51.398842    8357 cni.go:84] Creating CNI manager for ""
	I0829 18:06:51.398855    8357 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0829 18:06:51.400861    8357 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0829 18:06:51.402532    8357 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0829 18:06:51.411845    8357 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0829 18:06:51.430678    8357 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0829 18:06:51.430761    8357 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 18:06:51.430798    8357 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-399511 minikube.k8s.io/updated_at=2024_08_29T18_06_51_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=95341f0b655cea8be5ebfc6bf112c8367dc08d33 minikube.k8s.io/name=addons-399511 minikube.k8s.io/primary=true
	I0829 18:06:51.448965    8357 ops.go:34] apiserver oom_adj: -16
	I0829 18:06:51.586119    8357 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 18:06:52.086839    8357 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 18:06:52.586797    8357 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 18:06:53.087050    8357 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 18:06:53.586324    8357 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 18:06:54.087184    8357 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 18:06:54.586822    8357 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 18:06:55.086894    8357 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 18:06:55.586374    8357 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 18:06:56.086432    8357 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 18:06:56.190480    8357 kubeadm.go:1113] duration metric: took 4.759790241s to wait for elevateKubeSystemPrivileges
	I0829 18:06:56.190519    8357 kubeadm.go:394] duration metric: took 22.53888865s to StartCluster
	I0829 18:06:56.190546    8357 settings.go:142] acquiring lock: {Name:mkbc3c861e50da6e3fd7c849628042d37388c0b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:06:56.190687    8357 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19531-2266/kubeconfig
	I0829 18:06:56.191094    8357 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-2266/kubeconfig: {Name:mk3f4efcecfc05616550dcdc5ae840bfb00044d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:06:56.191350    8357 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0829 18:06:56.191471    8357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0829 18:06:56.191768    8357 config.go:182] Loaded profile config "addons-399511": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0829 18:06:56.191752    8357 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0829 18:06:56.191858    8357 addons.go:69] Setting inspektor-gadget=true in profile "addons-399511"
	I0829 18:06:56.191865    8357 addons.go:69] Setting metrics-server=true in profile "addons-399511"
	I0829 18:06:56.191894    8357 addons.go:234] Setting addon metrics-server=true in "addons-399511"
	I0829 18:06:56.191858    8357 addons.go:69] Setting yakd=true in profile "addons-399511"
	I0829 18:06:56.191924    8357 host.go:66] Checking if "addons-399511" exists ...
	I0829 18:06:56.191940    8357 addons.go:234] Setting addon yakd=true in "addons-399511"
	I0829 18:06:56.191974    8357 host.go:66] Checking if "addons-399511" exists ...
	I0829 18:06:56.192479    8357 cli_runner.go:164] Run: docker container inspect addons-399511 --format={{.State.Status}}
	I0829 18:06:56.192489    8357 cli_runner.go:164] Run: docker container inspect addons-399511 --format={{.State.Status}}
	I0829 18:06:56.193002    8357 addons.go:69] Setting cloud-spanner=true in profile "addons-399511"
	I0829 18:06:56.193045    8357 addons.go:234] Setting addon cloud-spanner=true in "addons-399511"
	I0829 18:06:56.193100    8357 host.go:66] Checking if "addons-399511" exists ...
	I0829 18:06:56.193591    8357 cli_runner.go:164] Run: docker container inspect addons-399511 --format={{.State.Status}}
	I0829 18:06:56.195867    8357 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-399511"
	I0829 18:06:56.196596    8357 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-399511"
	I0829 18:06:56.196661    8357 host.go:66] Checking if "addons-399511" exists ...
	I0829 18:06:56.197377    8357 cli_runner.go:164] Run: docker container inspect addons-399511 --format={{.State.Status}}
	I0829 18:06:56.191896    8357 addons.go:234] Setting addon inspektor-gadget=true in "addons-399511"
	I0829 18:06:56.197915    8357 host.go:66] Checking if "addons-399511" exists ...
	I0829 18:06:56.198388    8357 cli_runner.go:164] Run: docker container inspect addons-399511 --format={{.State.Status}}
	I0829 18:06:56.196429    8357 addons.go:69] Setting gcp-auth=true in profile "addons-399511"
	I0829 18:06:56.203546    8357 mustload.go:65] Loading cluster: addons-399511
	I0829 18:06:56.203736    8357 config.go:182] Loaded profile config "addons-399511": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0829 18:06:56.203995    8357 cli_runner.go:164] Run: docker container inspect addons-399511 --format={{.State.Status}}
	I0829 18:06:56.196416    8357 addons.go:69] Setting default-storageclass=true in profile "addons-399511"
	I0829 18:06:56.208203    8357 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-399511"
	I0829 18:06:56.208735    8357 cli_runner.go:164] Run: docker container inspect addons-399511 --format={{.State.Status}}
	I0829 18:06:56.196434    8357 addons.go:69] Setting ingress=true in profile "addons-399511"
	I0829 18:06:56.211277    8357 addons.go:234] Setting addon ingress=true in "addons-399511"
	I0829 18:06:56.211329    8357 host.go:66] Checking if "addons-399511" exists ...
	I0829 18:06:56.211772    8357 cli_runner.go:164] Run: docker container inspect addons-399511 --format={{.State.Status}}
	I0829 18:06:56.196507    8357 out.go:177] * Verifying Kubernetes components...
	I0829 18:06:56.220612    8357 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-399511"
	I0829 18:06:56.196438    8357 addons.go:69] Setting ingress-dns=true in profile "addons-399511"
	I0829 18:06:56.220801    8357 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-399511"
	I0829 18:06:56.220877    8357 host.go:66] Checking if "addons-399511" exists ...
	I0829 18:06:56.221403    8357 cli_runner.go:164] Run: docker container inspect addons-399511 --format={{.State.Status}}
	I0829 18:06:56.221565    8357 addons.go:234] Setting addon ingress-dns=true in "addons-399511"
	I0829 18:06:56.221615    8357 host.go:66] Checking if "addons-399511" exists ...
	I0829 18:06:56.222011    8357 cli_runner.go:164] Run: docker container inspect addons-399511 --format={{.State.Status}}
	I0829 18:06:56.237357    8357 addons.go:69] Setting registry=true in profile "addons-399511"
	I0829 18:06:56.237403    8357 addons.go:234] Setting addon registry=true in "addons-399511"
	I0829 18:06:56.237442    8357 host.go:66] Checking if "addons-399511" exists ...
	I0829 18:06:56.238001    8357 cli_runner.go:164] Run: docker container inspect addons-399511 --format={{.State.Status}}
	I0829 18:06:56.258979    8357 addons.go:69] Setting storage-provisioner=true in profile "addons-399511"
	I0829 18:06:56.259036    8357 addons.go:234] Setting addon storage-provisioner=true in "addons-399511"
	I0829 18:06:56.259074    8357 host.go:66] Checking if "addons-399511" exists ...
	I0829 18:06:56.259545    8357 cli_runner.go:164] Run: docker container inspect addons-399511 --format={{.State.Status}}
	I0829 18:06:56.281280    8357 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-399511"
	I0829 18:06:56.281332    8357 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-399511"
	I0829 18:06:56.281656    8357 cli_runner.go:164] Run: docker container inspect addons-399511 --format={{.State.Status}}
	I0829 18:06:56.284378    8357 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0829 18:06:56.286081    8357 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0829 18:06:56.286105    8357 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0829 18:06:56.286177    8357 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-399511
	I0829 18:06:56.313260    8357 addons.go:69] Setting volcano=true in profile "addons-399511"
	I0829 18:06:56.328639    8357 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 18:06:56.339785    8357 addons.go:69] Setting volumesnapshots=true in profile "addons-399511"
	I0829 18:06:56.339871    8357 addons.go:234] Setting addon volumesnapshots=true in "addons-399511"
	I0829 18:06:56.339926    8357 host.go:66] Checking if "addons-399511" exists ...
	I0829 18:06:56.342993    8357 cli_runner.go:164] Run: docker container inspect addons-399511 --format={{.State.Status}}
	I0829 18:06:56.361539    8357 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0829 18:06:56.361761    8357 host.go:66] Checking if "addons-399511" exists ...
	I0829 18:06:56.377562    8357 out.go:177]   - Using image docker.io/registry:2.8.3
	I0829 18:06:56.377585    8357 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0829 18:06:56.378275    8357 addons.go:234] Setting addon volcano=true in "addons-399511"
	I0829 18:06:56.378351    8357 host.go:66] Checking if "addons-399511" exists ...
	I0829 18:06:56.378793    8357 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0829 18:06:56.379034    8357 cli_runner.go:164] Run: docker container inspect addons-399511 --format={{.State.Status}}
	I0829 18:06:56.386111    8357 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.31.0
	I0829 18:06:56.386478    8357 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0829 18:06:56.386492    8357 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0829 18:06:56.386566    8357 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-399511
	I0829 18:06:56.392706    8357 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0829 18:06:56.392727    8357 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0829 18:06:56.392790    8357 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-399511
	I0829 18:06:56.395347    8357 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0829 18:06:56.395530    8357 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0829 18:06:56.395554    8357 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0829 18:06:56.395644    8357 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-399511
	I0829 18:06:56.434104    8357 addons.go:234] Setting addon default-storageclass=true in "addons-399511"
	I0829 18:06:56.434193    8357 host.go:66] Checking if "addons-399511" exists ...
	I0829 18:06:56.434655    8357 cli_runner.go:164] Run: docker container inspect addons-399511 --format={{.State.Status}}
	I0829 18:06:56.439787    8357 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0829 18:06:56.439969    8357 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0829 18:06:56.451557    8357 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0829 18:06:56.456403    8357 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0829 18:06:56.456429    8357 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0829 18:06:56.456506    8357 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-399511
	I0829 18:06:56.465093    8357 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-399511"
	I0829 18:06:56.465182    8357 host.go:66] Checking if "addons-399511" exists ...
	I0829 18:06:56.465657    8357 cli_runner.go:164] Run: docker container inspect addons-399511 --format={{.State.Status}}
	I0829 18:06:56.480482    8357 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0829 18:06:56.483667    8357 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0829 18:06:56.484721    8357 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 18:06:56.516876    8357 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0829 18:06:56.501269    8357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0829 18:06:56.517209    8357 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0829 18:06:56.524573    8357 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0829 18:06:56.526501    8357 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0829 18:06:56.526524    8357 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0829 18:06:56.526590    8357 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-399511
	I0829 18:06:56.531718    8357 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0829 18:06:56.531778    8357 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0829 18:06:56.531855    8357 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-399511
	I0829 18:06:56.545090    8357 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19531-2266/.minikube/machines/addons-399511/id_rsa Username:docker}
	I0829 18:06:56.550631    8357 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0829 18:06:56.552718    8357 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0829 18:06:56.554088    8357 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0829 18:06:56.554146    8357 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0829 18:06:56.554259    8357 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-399511
	I0829 18:06:56.571392    8357 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0829 18:06:56.576514    8357 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0829 18:06:56.576614    8357 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0829 18:06:56.576742    8357 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-399511
	I0829 18:06:56.603627    8357 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0829 18:06:56.604944    8357 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19531-2266/.minikube/machines/addons-399511/id_rsa Username:docker}
	I0829 18:06:56.605682    8357 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0829 18:06:56.605700    8357 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0829 18:06:56.605776    8357 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-399511
	I0829 18:06:56.608885    8357 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0829 18:06:56.609042    8357 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.9.0
	I0829 18:06:56.610709    8357 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0829 18:06:56.610728    8357 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0829 18:06:56.610792    8357 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-399511
	I0829 18:06:56.622868    8357 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.9.0
	I0829 18:06:56.628434    8357 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.9.0
	I0829 18:06:56.631299    8357 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0829 18:06:56.631328    8357 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (434001 bytes)
	I0829 18:06:56.631398    8357 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-399511
	I0829 18:06:56.639212    8357 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19531-2266/.minikube/machines/addons-399511/id_rsa Username:docker}
	I0829 18:06:56.642406    8357 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19531-2266/.minikube/machines/addons-399511/id_rsa Username:docker}
	I0829 18:06:56.684804    8357 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19531-2266/.minikube/machines/addons-399511/id_rsa Username:docker}
	I0829 18:06:56.687006    8357 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0829 18:06:56.687079    8357 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0829 18:06:56.687382    8357 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-399511
	I0829 18:06:56.695981    8357 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0829 18:06:56.697641    8357 out.go:177]   - Using image docker.io/busybox:stable
	I0829 18:06:56.699318    8357 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0829 18:06:56.699339    8357 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0829 18:06:56.699409    8357 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-399511
	I0829 18:06:56.743888    8357 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19531-2266/.minikube/machines/addons-399511/id_rsa Username:docker}
	I0829 18:06:56.748588    8357 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0829 18:06:56.767504    8357 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19531-2266/.minikube/machines/addons-399511/id_rsa Username:docker}
	I0829 18:06:56.780842    8357 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19531-2266/.minikube/machines/addons-399511/id_rsa Username:docker}
	I0829 18:06:56.807285    8357 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19531-2266/.minikube/machines/addons-399511/id_rsa Username:docker}
	I0829 18:06:56.808337    8357 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19531-2266/.minikube/machines/addons-399511/id_rsa Username:docker}
	I0829 18:06:56.808987    8357 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19531-2266/.minikube/machines/addons-399511/id_rsa Username:docker}
	I0829 18:06:56.824528    8357 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19531-2266/.minikube/machines/addons-399511/id_rsa Username:docker}
	I0829 18:06:56.825381    8357 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19531-2266/.minikube/machines/addons-399511/id_rsa Username:docker}
	I0829 18:06:56.825739    8357 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19531-2266/.minikube/machines/addons-399511/id_rsa Username:docker}
	W0829 18:06:56.840836    8357 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0829 18:06:56.840866    8357 retry.go:31] will retry after 283.45828ms: ssh: handshake failed: EOF
	I0829 18:06:57.051368    8357 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0829 18:06:57.051408    8357 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0829 18:06:57.221534    8357 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0829 18:06:57.275699    8357 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0829 18:06:57.275721    8357 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0829 18:06:57.343925    8357 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0829 18:06:57.343949    8357 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0829 18:06:57.344565    8357 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0829 18:06:57.445320    8357 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0829 18:06:57.464415    8357 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0829 18:06:57.464441    8357 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0829 18:06:57.490035    8357 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0829 18:06:57.490060    8357 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0829 18:06:57.499786    8357 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0829 18:06:57.499813    8357 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0829 18:06:57.524888    8357 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0829 18:06:57.530655    8357 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0829 18:06:57.531676    8357 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0829 18:06:57.531698    8357 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0829 18:06:57.537553    8357 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0829 18:06:57.630579    8357 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0829 18:06:57.630601    8357 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0829 18:06:57.669423    8357 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0829 18:06:57.676270    8357 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0829 18:06:57.691072    8357 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0829 18:06:57.787194    8357 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0829 18:06:57.787263    8357 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0829 18:06:57.808921    8357 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0829 18:06:57.809000    8357 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0829 18:06:57.841639    8357 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0829 18:06:57.841714    8357 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0829 18:06:57.981598    8357 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0829 18:06:57.981672    8357 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0829 18:06:58.177020    8357 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0829 18:06:58.177097    8357 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0829 18:06:58.333223    8357 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.816155185s)
	I0829 18:06:58.333314    8357 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0829 18:06:58.333245    8357 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.584628607s)
	I0829 18:06:58.334185    8357 node_ready.go:35] waiting up to 6m0s for node "addons-399511" to be "Ready" ...
	I0829 18:06:58.340426    8357 node_ready.go:49] node "addons-399511" has status "Ready":"True"
	I0829 18:06:58.340500    8357 node_ready.go:38] duration metric: took 6.286126ms for node "addons-399511" to be "Ready" ...
	I0829 18:06:58.340526    8357 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0829 18:06:58.364494    8357 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-7tnxm" in "kube-system" namespace to be "Ready" ...
	I0829 18:06:58.376059    8357 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0829 18:06:58.376132    8357 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0829 18:06:58.421780    8357 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0829 18:06:58.421852    8357 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0829 18:06:58.516210    8357 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0829 18:06:58.516283    8357 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0829 18:06:58.538423    8357 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0829 18:06:58.538497    8357 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0829 18:06:58.707886    8357 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0829 18:06:58.707914    8357 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0829 18:06:58.847921    8357 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-399511" context rescaled to 1 replicas
	I0829 18:06:58.877310    8357 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0829 18:06:58.877404    8357 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0829 18:06:58.882298    8357 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0829 18:06:58.882360    8357 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0829 18:06:58.898313    8357 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0829 18:06:59.111039    8357 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0829 18:06:59.111136    8357 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0829 18:06:59.128049    8357 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0829 18:06:59.197931    8357 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0829 18:06:59.198005    8357 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0829 18:06:59.253603    8357 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0829 18:06:59.253691    8357 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0829 18:06:59.294365    8357 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0829 18:06:59.294431    8357 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0829 18:06:59.400221    8357 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0829 18:06:59.400290    8357 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0829 18:06:59.441564    8357 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0829 18:06:59.441629    8357 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0829 18:06:59.489903    8357 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0829 18:06:59.790869    8357 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0829 18:06:59.790945    8357 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0829 18:06:59.838524    8357 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0829 18:06:59.838598    8357 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0829 18:07:00.178334    8357 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0829 18:07:00.364434    8357 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0829 18:07:00.364476    8357 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0829 18:07:00.389255    8357 pod_ready.go:103] pod "coredns-6f6b679f8f-7tnxm" in "kube-system" namespace has status "Ready":"False"
	I0829 18:07:00.919635    8357 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0829 18:07:00.919663    8357 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0829 18:07:01.291025    8357 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0829 18:07:01.291048    8357 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0829 18:07:02.383371    8357 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0829 18:07:02.871024    8357 pod_ready.go:103] pod "coredns-6f6b679f8f-7tnxm" in "kube-system" namespace has status "Ready":"False"
	I0829 18:07:03.408595    8357 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0829 18:07:03.408674    8357 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-399511
	I0829 18:07:03.435546    8357 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19531-2266/.minikube/machines/addons-399511/id_rsa Username:docker}
	I0829 18:07:03.870428    8357 pod_ready.go:93] pod "coredns-6f6b679f8f-7tnxm" in "kube-system" namespace has status "Ready":"True"
	I0829 18:07:03.870450    8357 pod_ready.go:82] duration metric: took 5.505879507s for pod "coredns-6f6b679f8f-7tnxm" in "kube-system" namespace to be "Ready" ...
	I0829 18:07:03.870461    8357 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-tcmj6" in "kube-system" namespace to be "Ready" ...
	I0829 18:07:04.369471    8357 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0829 18:07:04.438545    8357 addons.go:234] Setting addon gcp-auth=true in "addons-399511"
	I0829 18:07:04.438644    8357 host.go:66] Checking if "addons-399511" exists ...
	I0829 18:07:04.439248    8357 cli_runner.go:164] Run: docker container inspect addons-399511 --format={{.State.Status}}
	I0829 18:07:04.468707    8357 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0829 18:07:04.468760    8357 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-399511
	I0829 18:07:04.513776    8357 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19531-2266/.minikube/machines/addons-399511/id_rsa Username:docker}
	I0829 18:07:04.877422    8357 pod_ready.go:98] pod "coredns-6f6b679f8f-tcmj6" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-29 18:07:04 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-29 18:06:56 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-29 18:06:56 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-29 18:06:56 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-29 18:06:56 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.49.2 HostIPs:[{IP:192.168.49.2}] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2024-08-29 18:06:56 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-08-29 18:06:58 +0000 UTC,FinishedAt:2024-08-29 18:07:03 +0000 UTC,ContainerID:docker://640e6bc6b4037866ea4aa8ff384b4873c77d67e76c8d3c0f9cf3c489a7d46fdd,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.1 ImageID:docker-pullable://registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1 ContainerID:docker://640e6bc6b4037866ea4aa8ff384b4873c77d67e76c8d3c0f9cf3c489a7d46fdd Started:0x40022f6ff0 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0x40023b0a70} {Name:kube-api-access-59sjq MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0x40023b0a80}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0829 18:07:04.877451    8357 pod_ready.go:82] duration metric: took 1.006981573s for pod "coredns-6f6b679f8f-tcmj6" in "kube-system" namespace to be "Ready" ...
	E0829 18:07:04.877463    8357 pod_ready.go:67] WaitExtra: waitPodCondition: pod "coredns-6f6b679f8f-tcmj6" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-29 18:07:04 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-29 18:06:56 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-29 18:06:56 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-29 18:06:56 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-29 18:06:56 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.49.2 HostIPs:[{IP:192.168.49.2}] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2024-08-29 18:06:56 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-08-29 18:06:58 +0000 UTC,FinishedAt:2024-08-29 18:07:03 +0000 UTC,ContainerID:docker://640e6bc6b4037866ea4aa8ff384b4873c77d67e76c8d3c0f9cf3c489a7d46fdd,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.1 ImageID:docker-pullable://registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1 ContainerID:docker://640e6bc6b4037866ea4aa8ff384b4873c77d67e76c8d3c0f9cf3c489a7d46fdd Started:0x40022f6ff0 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0x40023b0a70} {Name:kube-api-access-59sjq MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0x40023b0a80}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0829 18:07:04.877472    8357 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-399511" in "kube-system" namespace to be "Ready" ...
	I0829 18:07:04.883233    8357 pod_ready.go:93] pod "etcd-addons-399511" in "kube-system" namespace has status "Ready":"True"
	I0829 18:07:04.883260    8357 pod_ready.go:82] duration metric: took 5.780699ms for pod "etcd-addons-399511" in "kube-system" namespace to be "Ready" ...
	I0829 18:07:04.883273    8357 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-399511" in "kube-system" namespace to be "Ready" ...
	I0829 18:07:04.889505    8357 pod_ready.go:93] pod "kube-apiserver-addons-399511" in "kube-system" namespace has status "Ready":"True"
	I0829 18:07:04.889527    8357 pod_ready.go:82] duration metric: took 6.247579ms for pod "kube-apiserver-addons-399511" in "kube-system" namespace to be "Ready" ...
	I0829 18:07:04.889540    8357 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-399511" in "kube-system" namespace to be "Ready" ...
	I0829 18:07:04.901722    8357 pod_ready.go:93] pod "kube-controller-manager-addons-399511" in "kube-system" namespace has status "Ready":"True"
	I0829 18:07:04.901757    8357 pod_ready.go:82] duration metric: took 12.200485ms for pod "kube-controller-manager-addons-399511" in "kube-system" namespace to be "Ready" ...
	I0829 18:07:04.901771    8357 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-whsvz" in "kube-system" namespace to be "Ready" ...
	I0829 18:07:05.093857    8357 pod_ready.go:93] pod "kube-proxy-whsvz" in "kube-system" namespace has status "Ready":"True"
	I0829 18:07:05.093931    8357 pod_ready.go:82] duration metric: took 192.153592ms for pod "kube-proxy-whsvz" in "kube-system" namespace to be "Ready" ...
	I0829 18:07:05.093958    8357 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-399511" in "kube-system" namespace to be "Ready" ...
	I0829 18:07:05.468764    8357 pod_ready.go:93] pod "kube-scheduler-addons-399511" in "kube-system" namespace has status "Ready":"True"
	I0829 18:07:05.468790    8357 pod_ready.go:82] duration metric: took 374.811184ms for pod "kube-scheduler-addons-399511" in "kube-system" namespace to be "Ready" ...
	I0829 18:07:05.468802    8357 pod_ready.go:39] duration metric: took 7.128250471s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0829 18:07:05.468820    8357 api_server.go:52] waiting for apiserver process to appear ...
	I0829 18:07:05.468887    8357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 18:07:07.079899    8357 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (9.634551584s)
	I0829 18:07:07.080110    8357 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (9.555195473s)
	I0829 18:07:07.080136    8357 addons.go:475] Verifying addon registry=true in "addons-399511"
	I0829 18:07:07.080336    8357 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (9.735266758s)
	I0829 18:07:07.080454    8357 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (9.549776457s)
	I0829 18:07:07.080513    8357 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (9.542932844s)
	I0829 18:07:07.080598    8357 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (9.411108414s)
	I0829 18:07:07.080664    8357 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (9.859065355s)
	I0829 18:07:07.080691    8357 addons.go:475] Verifying addon ingress=true in "addons-399511"
	I0829 18:07:07.082594    8357 out.go:177] * Verifying registry addon...
	I0829 18:07:07.082726    8357 out.go:177] * Verifying ingress addon...
	I0829 18:07:07.085593    8357 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0829 18:07:07.086695    8357 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0829 18:07:07.097484    8357 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0829 18:07:07.097505    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:07.098239    8357 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0829 18:07:07.098247    8357 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:07.595259    8357 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:07.597165    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:08.091450    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:08.092842    8357 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:08.622447    8357 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:08.623086    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:09.182385    8357 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:09.182975    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:09.616051    8357 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:09.620710    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:09.650906    8357 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (11.974456571s)
	I0829 18:07:09.650988    8357 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (11.959848403s)
	I0829 18:07:09.651248    8357 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (10.752856252s)
	I0829 18:07:09.651532    8357 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (10.523410994s)
	I0829 18:07:09.651551    8357 addons.go:475] Verifying addon metrics-server=true in "addons-399511"
	I0829 18:07:09.651633    8357 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (10.161652901s)
	W0829 18:07:09.651672    8357 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0829 18:07:09.651690    8357 retry.go:31] will retry after 133.971435ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0829 18:07:09.651770    8357 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (9.473393619s)
	I0829 18:07:09.653406    8357 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-399511 service yakd-dashboard -n yakd-dashboard
	
	I0829 18:07:09.786028    8357 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0829 18:07:10.206570    8357 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:10.207088    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:10.612386    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:10.612681    8357 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:10.806739    8357 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (6.338002057s)
	I0829 18:07:10.806876    8357 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (8.423475241s)
	I0829 18:07:10.806905    8357 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-399511"
	I0829 18:07:10.806832    8357 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (5.337923622s)
	I0829 18:07:10.807033    8357 api_server.go:72] duration metric: took 14.61564909s to wait for apiserver process to appear ...
	I0829 18:07:10.807042    8357 api_server.go:88] waiting for apiserver healthz status ...
	I0829 18:07:10.807061    8357 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0829 18:07:10.809081    8357 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0829 18:07:10.809081    8357 out.go:177] * Verifying csi-hostpath-driver addon...
	I0829 18:07:10.811149    8357 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0829 18:07:10.812143    8357 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0829 18:07:10.812729    8357 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0829 18:07:10.812752    8357 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0829 18:07:10.820797    8357 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0829 18:07:10.826252    8357 api_server.go:141] control plane version: v1.31.0
	I0829 18:07:10.826292    8357 api_server.go:131] duration metric: took 19.242502ms to wait for apiserver health ...
	I0829 18:07:10.826301    8357 system_pods.go:43] waiting for kube-system pods to appear ...
	I0829 18:07:10.828070    8357 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0829 18:07:10.828101    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:10.836451    8357 system_pods.go:59] 17 kube-system pods found
	I0829 18:07:10.836498    8357 system_pods.go:61] "coredns-6f6b679f8f-7tnxm" [1b131ed7-0bf5-4698-b93d-7108c905d203] Running
	I0829 18:07:10.836509    8357 system_pods.go:61] "csi-hostpath-attacher-0" [2a42da62-3a13-41e4-97cf-7a296eda43f7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0829 18:07:10.836519    8357 system_pods.go:61] "csi-hostpath-resizer-0" [8d3f91a7-5238-4fb5-a3fa-681d9f6ad11e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0829 18:07:10.836528    8357 system_pods.go:61] "csi-hostpathplugin-ccx6j" [f640eeb2-5e92-48e1-8a21-b7a16f534e95] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0829 18:07:10.836534    8357 system_pods.go:61] "etcd-addons-399511" [3b0b9745-f1c3-4ccb-a159-d01b5748913a] Running
	I0829 18:07:10.836540    8357 system_pods.go:61] "kube-apiserver-addons-399511" [29df537f-ddef-498e-80a6-e551ac8c68df] Running
	I0829 18:07:10.836549    8357 system_pods.go:61] "kube-controller-manager-addons-399511" [ab52d01d-7aa4-4822-882a-c4874d38070c] Running
	I0829 18:07:10.836555    8357 system_pods.go:61] "kube-ingress-dns-minikube" [a3fb70ac-15b5-4566-bc99-dd80b676c940] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0829 18:07:10.836559    8357 system_pods.go:61] "kube-proxy-whsvz" [3c657ad9-4239-4af9-b152-d4e3d193fb5e] Running
	I0829 18:07:10.836569    8357 system_pods.go:61] "kube-scheduler-addons-399511" [af0c76ac-9eb8-4499-b930-466bf2ef4863] Running
	I0829 18:07:10.836575    8357 system_pods.go:61] "metrics-server-8988944d9-bxwfw" [cfb12d61-e779-43c1-b037-4307e16e276b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0829 18:07:10.836582    8357 system_pods.go:61] "nvidia-device-plugin-daemonset-dqrjn" [5522a38d-8785-471b-8ec0-3e5d151909ad] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0829 18:07:10.836588    8357 system_pods.go:61] "registry-6fb4cdfc84-dlgj9" [02494a1f-30ad-4cf7-a0d8-5942ff632fdb] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0829 18:07:10.836594    8357 system_pods.go:61] "registry-proxy-p65z4" [c30311f9-7059-4c57-b652-d784a14e1d37] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0829 18:07:10.836601    8357 system_pods.go:61] "snapshot-controller-56fcc65765-gspj6" [3067aad6-a75a-4214-a757-0b08cd4867a9] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0829 18:07:10.836615    8357 system_pods.go:61] "snapshot-controller-56fcc65765-vd2rb" [e1869661-3f46-435f-a5c0-24b10a78b266] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0829 18:07:10.836620    8357 system_pods.go:61] "storage-provisioner" [72464b6f-a748-4e2f-9aeb-d54ba2906bf5] Running
	I0829 18:07:10.836626    8357 system_pods.go:74] duration metric: took 10.319405ms to wait for pod list to return data ...
	I0829 18:07:10.836634    8357 default_sa.go:34] waiting for default service account to be created ...
	I0829 18:07:10.842615    8357 default_sa.go:45] found service account: "default"
	I0829 18:07:10.842646    8357 default_sa.go:55] duration metric: took 6.005451ms for default service account to be created ...
	I0829 18:07:10.842658    8357 system_pods.go:116] waiting for k8s-apps to be running ...
	I0829 18:07:10.852226    8357 system_pods.go:86] 17 kube-system pods found
	I0829 18:07:10.852269    8357 system_pods.go:89] "coredns-6f6b679f8f-7tnxm" [1b131ed7-0bf5-4698-b93d-7108c905d203] Running
	I0829 18:07:10.852280    8357 system_pods.go:89] "csi-hostpath-attacher-0" [2a42da62-3a13-41e4-97cf-7a296eda43f7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0829 18:07:10.852290    8357 system_pods.go:89] "csi-hostpath-resizer-0" [8d3f91a7-5238-4fb5-a3fa-681d9f6ad11e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0829 18:07:10.852374    8357 system_pods.go:89] "csi-hostpathplugin-ccx6j" [f640eeb2-5e92-48e1-8a21-b7a16f534e95] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0829 18:07:10.852379    8357 system_pods.go:89] "etcd-addons-399511" [3b0b9745-f1c3-4ccb-a159-d01b5748913a] Running
	I0829 18:07:10.852404    8357 system_pods.go:89] "kube-apiserver-addons-399511" [29df537f-ddef-498e-80a6-e551ac8c68df] Running
	I0829 18:07:10.852409    8357 system_pods.go:89] "kube-controller-manager-addons-399511" [ab52d01d-7aa4-4822-882a-c4874d38070c] Running
	I0829 18:07:10.852424    8357 system_pods.go:89] "kube-ingress-dns-minikube" [a3fb70ac-15b5-4566-bc99-dd80b676c940] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0829 18:07:10.852433    8357 system_pods.go:89] "kube-proxy-whsvz" [3c657ad9-4239-4af9-b152-d4e3d193fb5e] Running
	I0829 18:07:10.852438    8357 system_pods.go:89] "kube-scheduler-addons-399511" [af0c76ac-9eb8-4499-b930-466bf2ef4863] Running
	I0829 18:07:10.852445    8357 system_pods.go:89] "metrics-server-8988944d9-bxwfw" [cfb12d61-e779-43c1-b037-4307e16e276b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0829 18:07:10.852457    8357 system_pods.go:89] "nvidia-device-plugin-daemonset-dqrjn" [5522a38d-8785-471b-8ec0-3e5d151909ad] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0829 18:07:10.852464    8357 system_pods.go:89] "registry-6fb4cdfc84-dlgj9" [02494a1f-30ad-4cf7-a0d8-5942ff632fdb] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0829 18:07:10.852472    8357 system_pods.go:89] "registry-proxy-p65z4" [c30311f9-7059-4c57-b652-d784a14e1d37] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0829 18:07:10.852479    8357 system_pods.go:89] "snapshot-controller-56fcc65765-gspj6" [3067aad6-a75a-4214-a757-0b08cd4867a9] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0829 18:07:10.852487    8357 system_pods.go:89] "snapshot-controller-56fcc65765-vd2rb" [e1869661-3f46-435f-a5c0-24b10a78b266] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0829 18:07:10.852501    8357 system_pods.go:89] "storage-provisioner" [72464b6f-a748-4e2f-9aeb-d54ba2906bf5] Running
	I0829 18:07:10.852509    8357 system_pods.go:126] duration metric: took 9.844747ms to wait for k8s-apps to be running ...
	I0829 18:07:10.852517    8357 system_svc.go:44] waiting for kubelet service to be running ....
	I0829 18:07:10.852584    8357 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 18:07:10.979765    8357 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0829 18:07:10.979791    8357 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0829 18:07:11.091128    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:11.094261    8357 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:11.114971    8357 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0829 18:07:11.115037    8357 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0829 18:07:11.139972    8357 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0829 18:07:11.317527    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:11.597327    8357 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:11.598004    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:11.817676    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:12.094459    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:12.095863    8357 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:12.317557    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:12.524426    8357 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.738345855s)
	I0829 18:07:12.524452    8357 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.671845426s)
	I0829 18:07:12.524530    8357 system_svc.go:56] duration metric: took 1.672008558s WaitForService to wait for kubelet
	I0829 18:07:12.524566    8357 kubeadm.go:582] duration metric: took 16.333172519s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0829 18:07:12.524604    8357 node_conditions.go:102] verifying NodePressure condition ...
	I0829 18:07:12.527975    8357 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0829 18:07:12.528053    8357 node_conditions.go:123] node cpu capacity is 2
	I0829 18:07:12.528079    8357 node_conditions.go:105] duration metric: took 3.458389ms to run NodePressure ...
	I0829 18:07:12.528107    8357 start.go:241] waiting for startup goroutines ...
	I0829 18:07:12.590163    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:12.592026    8357 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:12.697278    8357 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.557221723s)
	I0829 18:07:12.701166    8357 addons.go:475] Verifying addon gcp-auth=true in "addons-399511"
	I0829 18:07:12.704527    8357 out.go:177] * Verifying gcp-auth addon...
	I0829 18:07:12.707829    8357 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0829 18:07:12.712461    8357 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0829 18:07:12.817889    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:13.091062    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:13.093825    8357 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:13.316884    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:13.591910    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:13.593381    8357 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:13.817722    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:14.091995    8357 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:14.092523    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:14.317251    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:14.590741    8357 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:14.591434    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:14.817276    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:15.091709    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:15.092843    8357 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:15.316867    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:15.591273    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:15.593121    8357 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:15.816818    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:16.093840    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:16.095235    8357 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:16.317715    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:16.590698    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:16.592904    8357 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:16.820610    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:17.090204    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:17.091498    8357 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:17.316661    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:17.591808    8357 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:17.592527    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:17.816632    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:18.089630    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:18.093050    8357 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:18.317303    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:18.591151    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:18.592351    8357 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:18.816909    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:19.090022    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:19.098517    8357 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:19.316619    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:19.591476    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:19.592654    8357 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:19.818190    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:20.094317    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:20.095542    8357 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:20.317692    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:20.589481    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:20.592804    8357 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:20.816937    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:21.091575    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:21.092559    8357 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:21.317219    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:21.591507    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:21.593845    8357 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:21.817109    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:22.091104    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:22.092179    8357 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:22.316936    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:22.589910    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:22.592776    8357 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:22.817790    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:23.091922    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:23.093115    8357 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:23.316966    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:23.590182    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:23.591417    8357 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:23.817193    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:24.092261    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:24.092637    8357 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:24.317496    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:24.589381    8357 kapi.go:107] duration metric: took 17.503787398s to wait for kubernetes.io/minikube-addons=registry ...
	I0829 18:07:24.591047    8357 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:24.816529    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:25.090980    8357 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:25.321707    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:25.602057    8357 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:25.816644    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:26.091657    8357 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:26.317543    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:26.591357    8357 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:26.817162    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:27.091476    8357 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:27.316485    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:27.591378    8357 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:27.817461    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:28.091670    8357 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:28.317350    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:28.592253    8357 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:28.816822    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:29.092370    8357 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:29.317240    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:29.591679    8357 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:29.816998    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:30.099253    8357 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:30.317023    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:30.591176    8357 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:30.817075    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:31.091784    8357 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:31.316897    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:31.591953    8357 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:31.818130    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:32.093366    8357 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:32.318493    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:32.594355    8357 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:32.818511    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:33.092663    8357 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:33.317263    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:33.591313    8357 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:33.816936    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:34.091646    8357 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:34.317144    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:34.591657    8357 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:34.817094    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:35.091887    8357 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:35.317092    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:35.592037    8357 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:35.817673    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:36.092272    8357 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:36.318132    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:36.591841    8357 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:36.817324    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:37.093034    8357 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:37.317635    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:37.592260    8357 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:37.818141    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:38.099195    8357 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:38.318307    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:38.591826    8357 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:38.818252    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:39.092732    8357 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:39.317183    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:39.598983    8357 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:39.817027    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:40.092928    8357 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:40.317526    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:40.591944    8357 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:40.817817    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:41.092839    8357 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:41.317188    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:41.591565    8357 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:41.819359    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:42.091770    8357 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:42.321195    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:42.590667    8357 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:42.816691    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:43.091927    8357 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:43.317594    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:43.592582    8357 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:43.825180    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:44.092842    8357 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:44.322152    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:44.591956    8357 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:44.819689    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:45.097209    8357 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:45.321521    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:45.593196    8357 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:45.837569    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:46.091476    8357 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:46.320636    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:46.592697    8357 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:46.820179    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:47.091872    8357 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:47.317402    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:47.591084    8357 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:47.817584    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:48.092608    8357 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:48.317837    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:48.593254    8357 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:48.818508    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:49.091065    8357 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:49.317483    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:49.591388    8357 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:49.817265    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:50.092479    8357 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:50.317915    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:50.591206    8357 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:50.817229    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:51.094883    8357 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:51.317672    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:51.622946    8357 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:51.817887    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:52.091741    8357 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:52.316868    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:52.594263    8357 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:52.829996    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:53.090729    8357 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:53.317043    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:53.594392    8357 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:53.816791    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:54.091559    8357 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:54.316764    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:54.595318    8357 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:54.817164    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:55.091602    8357 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:55.318053    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:55.641252    8357 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:55.818085    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:56.092250    8357 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:56.318992    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:56.591360    8357 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:56.817473    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:57.092031    8357 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:57.317429    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:57.590982    8357 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:57.816999    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:58.091961    8357 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:58.317308    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:58.592874    8357 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:58.818083    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:59.090888    8357 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:59.317482    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:59.591560    8357 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:59.818444    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:00.108472    8357 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:08:00.327120    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:00.621091    8357 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:08:00.824995    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:01.094350    8357 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:08:01.317356    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:01.592331    8357 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:08:01.818436    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:02.095293    8357 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:08:02.317855    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:02.591757    8357 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:08:02.818995    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:03.090979    8357 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:08:03.316838    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:03.591655    8357 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:08:03.817480    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:04.091452    8357 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:08:04.317006    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:04.609178    8357 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:08:04.817634    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:05.092006    8357 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:08:05.317270    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:05.594427    8357 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:08:05.817136    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:06.092529    8357 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:08:06.316787    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:06.591331    8357 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:08:06.817057    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:07.091526    8357 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:08:07.321946    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:07.592340    8357 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:08:07.817712    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:08.091315    8357 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:08:08.316699    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:08.592700    8357 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:08:08.817278    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:09.091530    8357 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:08:09.324821    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:09.591960    8357 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:08:09.817496    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:10.093761    8357 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:08:10.316949    8357 kapi.go:107] duration metric: took 59.504807753s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0829 18:08:10.591909    8357 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:08:11.094082    8357 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:08:11.591630    8357 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:08:12.091121    8357 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:08:12.592553    8357 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:08:13.092467    8357 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:08:13.590967    8357 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:08:14.092349    8357 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:08:14.592364    8357 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:08:15.093977    8357 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:08:15.592070    8357 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:08:16.092416    8357 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:08:16.591564    8357 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:08:17.092907    8357 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:08:17.591450    8357 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:08:18.091661    8357 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:08:18.596368    8357 kapi.go:107] duration metric: took 1m11.509595431s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0829 18:08:35.724961    8357 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0829 18:08:35.724983    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:36.212687    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:36.711765    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:37.212240    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:37.712598    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:38.211199    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:38.712171    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:39.212460    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:39.711530    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:40.212435    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:40.711300    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:41.212412    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:41.711404    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:42.211737    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:42.711983    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:43.211773    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:43.711960    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:44.212057    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:44.712109    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:45.212506    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:45.711009    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:46.212432    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:46.711556    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:47.212208    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:47.712417    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:48.211991    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:48.711887    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:49.211688    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:49.711765    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:50.213781    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:50.712505    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:51.212059    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:51.711258    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:52.211310    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:52.711435    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:53.211726    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:53.711854    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:54.211448    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:54.711738    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:55.211375    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:55.711867    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:56.211678    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:56.711469    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:57.211471    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:57.712177    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:58.210966    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:58.712009    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:59.212565    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:59.711862    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:00.329807    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:00.711049    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:01.211695    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:01.711461    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:02.210951    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:02.712075    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:03.212393    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:03.711940    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:04.211264    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:04.712914    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:05.211451    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:05.711219    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:06.212288    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:06.711794    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:07.212821    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:07.712365    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:08.211253    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:08.715708    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:09.211786    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:09.712240    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:10.212072    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:10.711816    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:11.212162    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:11.712173    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:12.211971    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:12.711718    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:13.212074    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:13.711842    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:14.211707    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:14.711975    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:15.212442    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:15.719317    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:16.211752    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:16.712764    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:17.211429    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:17.712081    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:18.211667    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:18.711441    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:19.211341    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:19.712115    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:20.212360    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:20.711384    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:21.212476    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:21.711653    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:22.211023    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:22.711794    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:23.211881    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:23.711535    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:24.211152    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:24.712189    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:25.212517    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:25.711663    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:26.212255    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:26.711588    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:27.211651    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:27.711799    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:28.211205    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:28.711024    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:29.211734    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:29.711616    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:30.213109    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:30.711833    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:31.212611    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:31.711280    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:32.211686    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:32.712096    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:33.212244    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:33.711667    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:34.211204    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:34.712441    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:35.211536    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:35.712140    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:36.212347    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:36.711409    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:37.212160    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:37.711121    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:38.211600    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:38.711589    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:39.211724    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:39.710930    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:40.212713    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:40.711466    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:41.212431    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:41.711746    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:42.212676    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:42.712405    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:43.212735    8357 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:43.711572    8357 kapi.go:107] duration metric: took 2m31.00373825s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0829 18:09:43.713377    8357 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-399511 cluster.
	I0829 18:09:43.715136    8357 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0829 18:09:43.716873    8357 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0829 18:09:43.718964    8357 out.go:177] * Enabled addons: nvidia-device-plugin, storage-provisioner, cloud-spanner, ingress-dns, default-storageclass, volcano, metrics-server, inspektor-gadget, yakd, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0829 18:09:43.720657    8357 addons.go:510] duration metric: took 2m47.528907996s for enable addons: enabled=[nvidia-device-plugin storage-provisioner cloud-spanner ingress-dns default-storageclass volcano metrics-server inspektor-gadget yakd storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0829 18:09:43.720721    8357 start.go:246] waiting for cluster config update ...
	I0829 18:09:43.720759    8357 start.go:255] writing updated cluster config ...
	I0829 18:09:43.721059    8357 ssh_runner.go:195] Run: rm -f paused
	I0829 18:09:44.106006    8357 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0829 18:09:44.107798    8357 out.go:177] * Done! kubectl is now configured to use "addons-399511" cluster and "default" namespace by default
	
	
	==> Docker <==
	Aug 29 18:19:09 addons-399511 dockerd[1277]: time="2024-08-29T18:19:09.001170196Z" level=error msg="stream copy error: reading from a closed fifo"
	Aug 29 18:19:09 addons-399511 dockerd[1277]: time="2024-08-29T18:19:09.004120522Z" level=error msg="stream copy error: reading from a closed fifo"
	Aug 29 18:19:09 addons-399511 dockerd[1277]: time="2024-08-29T18:19:09.007034065Z" level=error msg="Error running exec b37dc7ba8833743b174ec7b5ebce44d066edf98008c003a45a2819059989dad9 in container: OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown"
	Aug 29 18:19:09 addons-399511 dockerd[1277]: time="2024-08-29T18:19:09.098292422Z" level=info msg="ignoring event" container=08f8b9945e0fa36dcbdbdc98bcfc8444cd62fa0d58418da76d049117ddf8c6aa module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 29 18:19:12 addons-399511 dockerd[1277]: time="2024-08-29T18:19:12.621472530Z" level=info msg="ignoring event" container=87eb4fc3991d1d6a4be2738baae3b76f177900f4ff9014af31ddaa7ce0f48baf module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 29 18:19:12 addons-399511 dockerd[1277]: time="2024-08-29T18:19:12.637973718Z" level=info msg="ignoring event" container=7246787be5fccdd3280905a3f147947e054c2aad906b9ecf1e90c711c5b7c411 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 29 18:19:12 addons-399511 dockerd[1277]: time="2024-08-29T18:19:12.809452423Z" level=info msg="ignoring event" container=172ab746a6c0eb2b4db7101c4780d9fa9e7ddbc8e70d2599a0c98d01a63821c7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 29 18:19:12 addons-399511 dockerd[1277]: time="2024-08-29T18:19:12.854401429Z" level=info msg="ignoring event" container=8803d8bce398208a5ad13d9ce43600c476ac94687e1831b817785bc03ba4d7f7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 29 18:19:19 addons-399511 dockerd[1277]: time="2024-08-29T18:19:19.600558050Z" level=info msg="ignoring event" container=cb6bfe25f592f5c5f0661df915e15f596cca6f92348eef4909111a7572776d36 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 29 18:19:19 addons-399511 dockerd[1277]: time="2024-08-29T18:19:19.726040498Z" level=info msg="ignoring event" container=6a1720751aada4b7c1e87d27f09ec0952bcfdd83b93de13f48b3d1873790b0a2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 29 18:19:25 addons-399511 dockerd[1277]: time="2024-08-29T18:19:25.034559908Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed"
	Aug 29 18:19:25 addons-399511 dockerd[1277]: time="2024-08-29T18:19:25.038432688Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed"
	Aug 29 18:19:25 addons-399511 dockerd[1277]: time="2024-08-29T18:19:25.280394046Z" level=info msg="ignoring event" container=29f479ab45ef54df56c35e76c00847ba1ebff2daf5a5b7ce95270565059f567d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 29 18:19:31 addons-399511 cri-dockerd[1535]: time="2024-08-29T18:19:31Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/cb979eff7e8a53146a34dffc9bded595842cdbe6cd353a42d3d2b122d7d5c385/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal options ndots:5]"
	Aug 29 18:19:33 addons-399511 cri-dockerd[1535]: time="2024-08-29T18:19:33Z" level=info msg="Stop pulling image docker.io/nginx:alpine: Status: Downloaded newer image for nginx:alpine"
	Aug 29 18:19:41 addons-399511 dockerd[1277]: time="2024-08-29T18:19:41.550823847Z" level=info msg="ignoring event" container=a40925e291b399849e54dc14a172adf519ea3ba3cea1ec0e708aca03a243795a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 29 18:19:41 addons-399511 cri-dockerd[1535]: time="2024-08-29T18:19:41Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/c38d9bb0a3a70f41ce6ff0dd23f73119bd469b05153b69a639e6f1fe6308b05a/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal options ndots:5]"
	Aug 29 18:19:42 addons-399511 dockerd[1277]: time="2024-08-29T18:19:42.244777081Z" level=info msg="ignoring event" container=6a479a02da06e6c12be5ed530290a27c47a973c350d08c8b3dd9798ef075ada2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 29 18:19:42 addons-399511 dockerd[1277]: time="2024-08-29T18:19:42.365006878Z" level=info msg="ignoring event" container=885c7cc5edab1af13b344eda161b44b6ffb5b0654524f95685ded3a6aad6b9a8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 29 18:19:42 addons-399511 dockerd[1277]: time="2024-08-29T18:19:42.463995218Z" level=info msg="ignoring event" container=af8bcc3605d016e825b3c540eddbfb7e9ea03a06e3d64633982db1304645da6a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 29 18:19:42 addons-399511 dockerd[1277]: time="2024-08-29T18:19:42.539926221Z" level=info msg="ignoring event" container=50c88541427fd1c0868a3129e18a2d778bced5c10377ef74c68eb11c3c9f3f26 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 29 18:19:42 addons-399511 cri-dockerd[1535]: time="2024-08-29T18:19:42Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"registry-6fb4cdfc84-dlgj9_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	Aug 29 18:19:42 addons-399511 dockerd[1277]: time="2024-08-29T18:19:42.819010925Z" level=info msg="ignoring event" container=f45dc59cb1818f98e3ebe4fdd5e2baf584a57d1fc1236dc97bd1745c59cc99b3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 29 18:19:43 addons-399511 dockerd[1277]: time="2024-08-29T18:19:43.033449483Z" level=info msg="ignoring event" container=0ef61e28f8aadf9a3af18a3a84e2b81a77cbf8e9572769ebc528ab9c35912899 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 29 18:19:43 addons-399511 cri-dockerd[1535]: time="2024-08-29T18:19:43Z" level=info msg="Stop pulling image docker.io/kicbase/echo-server:1.0: Status: Downloaded newer image for kicbase/echo-server:1.0"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                       ATTEMPT             POD ID              POD
	62332741fee7a       kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                                  1 second ago        Running             hello-world-app            0                   c38d9bb0a3a70       hello-world-app-55bf9c44b4-7dxmc
	2c8828ecc26e3       nginx@sha256:c04c18adc2a407740a397c8407c011fc6c90026a9b65cceddef7ae5484360158                                                11 seconds ago      Running             nginx                      0                   cb979eff7e8a5       nginx
	c7ec0c5a7f067       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb                 10 minutes ago      Running             gcp-auth                   0                   e563b15fdc548       gcp-auth-89d5ffd79-l4wql
	32ea3665ab843       registry.k8s.io/ingress-nginx/controller@sha256:d5f8217feeac4887cb1ed21f27c2674e58be06bd8f5184cacea2a69abaf78dce             11 minutes ago      Running             controller                 0                   39d0aa81ec3a3       ingress-nginx-controller-bc57996ff-vtprg
	2ffef2c167833       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a320a50cc91bd15fd2d6fa6de58bd98c1bd64b9a6f926ce23a600d87043455a3   11 minutes ago      Exited              patch                      0                   9b7db7501fd3f       ingress-nginx-admission-patch-rt6pg
	622b234efa7d1       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a320a50cc91bd15fd2d6fa6de58bd98c1bd64b9a6f926ce23a600d87043455a3   11 minutes ago      Exited              create                     0                   25509eeea922c       ingress-nginx-admission-create-7x2sk
	68a4fd1db6665       marcnuri/yakd@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624                                        12 minutes ago      Running             yakd                       0                   1e95e32d5c8ee       yakd-dashboard-67d98fc6b-jm8hb
	94feb0b93e25d       rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246                       12 minutes ago      Running             local-path-provisioner     0                   58d3ff69588b2       local-path-provisioner-86d989889c-4kwnl
	5b8a7ba3037da       nvcr.io/nvidia/k8s-device-plugin@sha256:ed39e22c8b71343fb996737741a99da88ce6c75dd83b5c520e0b3d8e8a884c47                     12 minutes ago      Running             nvidia-device-plugin-ctr   0                   0d59145cb3ee9       nvidia-device-plugin-daemonset-dqrjn
	9c3743ecbba24       gcr.io/cloud-spanner-emulator/emulator@sha256:636fdfc528824bae5f0ea2eca6ae307fe81092f05ec21038008bc0d6100e52fc               12 minutes ago      Running             cloud-spanner-emulator     0                   45af828e9f452       cloud-spanner-emulator-769b77f747-6ck78
	f3ff3f665f1bb       ba04bb24b9575                                                                                                                12 minutes ago      Running             storage-provisioner        0                   14310089c709a       storage-provisioner
	7aa33475239de       2437cf7621777                                                                                                                12 minutes ago      Running             coredns                    0                   ea6afe50d8952       coredns-6f6b679f8f-7tnxm
	bd2a027b29eb9       71d55d66fd4ee                                                                                                                12 minutes ago      Running             kube-proxy                 0                   f5e98740a32a6       kube-proxy-whsvz
	299f75e778ae0       fcb0683e6bdbd                                                                                                                13 minutes ago      Running             kube-controller-manager    0                   2fbed51714c18       kube-controller-manager-addons-399511
	dcc0db9a858e8       fbbbd428abb4d                                                                                                                13 minutes ago      Running             kube-scheduler             0                   b10f7f59c7f1e       kube-scheduler-addons-399511
	ee64bc7ef548c       27e3830e14027                                                                                                                13 minutes ago      Running             etcd                       0                   6e49aa5efa929       etcd-addons-399511
	c684a4c457529       cd0f0ae0ec9e0                                                                                                                13 minutes ago      Running             kube-apiserver             0                   d68245bef5626       kube-apiserver-addons-399511
	
	
	==> controller_ingress [32ea3665ab84] <==
	10.244.0.1 - - [29/Aug/2024:18:19:40 +0000] "GET / HTTP/1.1" 200 615 "-" "curl/7.81.0" 81 0.001 [default-nginx-80] [] 10.244.0.31:80 615 0.001 200 cba2abba9a10c6ee8140cbb680943f08
	W0829 18:19:30.891686       7 controller.go:1110] Error obtaining Endpoints for Service "default/nginx": no object matching key "default/nginx" in local store
	I0829 18:19:30.891817       7 controller.go:193] "Configuration changes detected, backend reload required"
	I0829 18:19:30.892041       7 event.go:377] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"nginx-ingress", UID:"d676f058-6235-40dc-af80-0cc6579f3f07", APIVersion:"networking.k8s.io/v1", ResourceVersion:"2789", FieldPath:""}): type: 'Normal' reason: 'Sync' Scheduled for sync
	I0829 18:19:30.948489       7 controller.go:213] "Backend successfully reloaded"
	I0829 18:19:30.949252       7 event.go:377] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-bc57996ff-vtprg", UID:"3d5790ed-2844-4680-9297-2df01170d070", APIVersion:"v1", ResourceVersion:"1266", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	W0829 18:19:34.225967       7 controller.go:1216] Service "default/nginx" does not have any active Endpoint.
	I0829 18:19:34.226080       7 controller.go:193] "Configuration changes detected, backend reload required"
	I0829 18:19:34.265855       7 controller.go:213] "Backend successfully reloaded"
	I0829 18:19:34.266333       7 event.go:377] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-bc57996ff-vtprg", UID:"3d5790ed-2844-4680-9297-2df01170d070", APIVersion:"v1", ResourceVersion:"1266", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	W0829 18:19:40.724844       7 controller.go:1110] Error obtaining Endpoints for Service "kube-system/hello-world-app": no object matching key "kube-system/hello-world-app" in local store
	I0829 18:19:40.766860       7 admission.go:149] processed ingress via admission controller {testedIngressLength:2 testedIngressTime:0.042s renderingIngressLength:2 renderingIngressTime:0s admissionTime:0.042s testedConfigurationSize:26.2kB}
	I0829 18:19:40.767114       7 main.go:107] "successfully validated configuration, accepting" ingress="kube-system/example-ingress"
	I0829 18:19:40.785854       7 store.go:440] "Found valid IngressClass" ingress="kube-system/example-ingress" ingressclass="nginx"
	I0829 18:19:40.786968       7 event.go:377] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"kube-system", Name:"example-ingress", UID:"ed06a7dd-aef9-4aff-afbe-1aa215bcefd1", APIVersion:"networking.k8s.io/v1", ResourceVersion:"2834", FieldPath:""}): type: 'Normal' reason: 'Sync' Scheduled for sync
	I0829 18:19:40.892190       7 controller.go:193] "Configuration changes detected, backend reload required"
	I0829 18:19:40.953116       7 controller.go:213] "Backend successfully reloaded"
	I0829 18:19:40.954749       7 event.go:377] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-bc57996ff-vtprg", UID:"3d5790ed-2844-4680-9297-2df01170d070", APIVersion:"v1", ResourceVersion:"1266", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	I0829 18:19:43.849112       7 sigterm.go:36] "Received SIGTERM, shutting down"
	I0829 18:19:43.849145       7 nginx.go:393] "Shutting down controller queues"
	E0829 18:19:43.850788       7 status.go:120] "error obtaining running IP address" err="pods is forbidden: User \"system:serviceaccount:ingress-nginx:ingress-nginx\" cannot list resource \"pods\" in API group \"\" in the namespace \"ingress-nginx\""
	I0829 18:19:43.850806       7 nginx.go:401] "Stopping admission controller"
	E0829 18:19:43.850850       7 nginx.go:340] "Error listening for TLS connections" err="http: Server closed"
	I0829 18:19:43.850937       7 nginx.go:409] "Stopping NGINX process"
	2024/08/29 18:19:43 [notice] 315#315: signal process started
	
	
	==> coredns [7aa33475239d] <==
	[INFO] 10.244.0.21:38089 - 60809 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000027232s
	[INFO] 10.244.0.21:38089 - 8558 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002220525s
	[INFO] 10.244.0.21:38089 - 458 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.008986943s
	[INFO] 10.244.0.21:37277 - 61292 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.011013411s
	[INFO] 10.244.0.21:58157 - 45725 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000082247s
	[INFO] 10.244.0.21:55102 - 31170 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000080179s
	[INFO] 10.244.0.21:58157 - 46160 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000059962s
	[INFO] 10.244.0.21:38089 - 5271 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000071498s
	[INFO] 10.244.0.21:55102 - 26924 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000075995s
	[INFO] 10.244.0.21:58157 - 38391 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000053481s
	[INFO] 10.244.0.21:37277 - 30315 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001403448s
	[INFO] 10.244.0.21:58157 - 26795 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000099428s
	[INFO] 10.244.0.21:55102 - 61361 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000035339s
	[INFO] 10.244.0.21:58157 - 34757 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000563051s
	[INFO] 10.244.0.21:55102 - 33577 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000093643s
	[INFO] 10.244.0.21:37277 - 23053 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000077201s
	[INFO] 10.244.0.21:55102 - 39106 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000077726s
	[INFO] 10.244.0.21:58157 - 55269 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000081804s
	[INFO] 10.244.0.21:55102 - 63847 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000063408s
	[INFO] 10.244.0.21:58157 - 17268 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.000858931s
	[INFO] 10.244.0.21:55102 - 61567 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001907398s
	[INFO] 10.244.0.21:58157 - 10073 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.006138753s
	[INFO] 10.244.0.21:55102 - 23147 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002500323s
	[INFO] 10.244.0.21:58157 - 1855 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000070596s
	[INFO] 10.244.0.21:55102 - 13703 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000081377s
	
	
	==> describe nodes <==
	Name:               addons-399511
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-399511
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=95341f0b655cea8be5ebfc6bf112c8367dc08d33
	                    minikube.k8s.io/name=addons-399511
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_29T18_06_51_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-399511
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 29 Aug 2024 18:06:48 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-399511
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 29 Aug 2024 18:19:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 29 Aug 2024 18:15:30 +0000   Thu, 29 Aug 2024 18:06:44 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 29 Aug 2024 18:15:30 +0000   Thu, 29 Aug 2024 18:06:44 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 29 Aug 2024 18:15:30 +0000   Thu, 29 Aug 2024 18:06:44 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 29 Aug 2024 18:15:30 +0000   Thu, 29 Aug 2024 18:06:48 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-399511
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 dc1e0345738c42ad8c433046207a2fff
	  System UUID:                d93b94ba-2fa6-40ec-9fe2-1337b7ad416e
	  Boot ID:                    8a292db4-13de-4eac-93d3-33bf90b49951
	  Kernel Version:             5.15.0-1068-aws
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://27.1.2
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m17s
	  default                     cloud-spanner-emulator-769b77f747-6ck78    0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  default                     hello-world-app-55bf9c44b4-7dxmc           0 (0%)        0 (0%)      0 (0%)           0 (0%)         4s
	  default                     nginx                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         13s
	  gcp-auth                    gcp-auth-89d5ffd79-l4wql                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 coredns-6f6b679f8f-7tnxm                   100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     12m
	  kube-system                 etcd-addons-399511                         100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         12m
	  kube-system                 kube-apiserver-addons-399511               250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-addons-399511      200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-whsvz                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-addons-399511               100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 nvidia-device-plugin-daemonset-dqrjn       0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  local-path-storage          local-path-provisioner-86d989889c-4kwnl    0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  yakd-dashboard              yakd-dashboard-67d98fc6b-jm8hb             0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             298Mi (3%)  426Mi (5%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 12m                kube-proxy       
	  Normal   NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Warning  CgroupV1                 13m                kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node addons-399511 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    13m (x7 over 13m)  kubelet          Node addons-399511 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node addons-399511 status is now: NodeHasSufficientPID
	  Normal   Starting                 13m                kubelet          Starting kubelet.
	  Normal   Starting                 12m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 12m                kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  12m                kubelet          Node addons-399511 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m                kubelet          Node addons-399511 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m                kubelet          Node addons-399511 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           12m                node-controller  Node addons-399511 event: Registered Node addons-399511 in Controller
	
	
	==> dmesg <==
	[Aug29 17:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014928] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.488358] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.064179] systemd[1]: /lib/systemd/system/cloud-init-local.service:15: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.002815] systemd[1]: /lib/systemd/system/cloud-init.service:19: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.018415] systemd[1]: /lib/systemd/system/cloud-init.target:15: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.004787] systemd[1]: /lib/systemd/system/cloud-final.service:9: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.003936] systemd[1]: /lib/systemd/system/cloud-config.service:8: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.700067] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.379221] kauditd_printk_skb: 36 callbacks suppressed
	
	
	==> etcd [ee64bc7ef548] <==
	{"level":"info","ts":"2024-08-29T18:06:44.475285Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-29T18:06:44.483101Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-29T18:06:45.112351Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2024-08-29T18:06:45.112629Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2024-08-29T18:06:45.112855Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2024-08-29T18:06:45.113048Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2024-08-29T18:06:45.113170Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-08-29T18:06:45.113357Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2024-08-29T18:06:45.113512Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-08-29T18:06:45.116481Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-29T18:06:45.120569Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-399511 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-29T18:06:45.120798Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-29T18:06:45.121474Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-29T18:06:45.121748Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-29T18:06:45.121937Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-29T18:06:45.124359Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-29T18:06:45.128380Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-29T18:06:45.132810Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-08-29T18:06:45.134038Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-29T18:06:45.149913Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-29T18:06:45.148384Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-29T18:06:45.152620Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-29T18:16:45.658577Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1876}
	{"level":"info","ts":"2024-08-29T18:16:45.720641Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1876,"took":"61.17103ms","hash":4158474444,"current-db-size-bytes":8966144,"current-db-size":"9.0 MB","current-db-size-in-use-bytes":4927488,"current-db-size-in-use":"4.9 MB"}
	{"level":"info","ts":"2024-08-29T18:16:45.720689Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":4158474444,"revision":1876,"compact-revision":-1}
	
	
	==> gcp-auth [c7ec0c5a7f06] <==
	2024/08/29 18:09:42 GCP Auth Webhook started!
	2024/08/29 18:10:01 Ready to marshal response ...
	2024/08/29 18:10:01 Ready to write response ...
	2024/08/29 18:10:02 Ready to marshal response ...
	2024/08/29 18:10:02 Ready to write response ...
	2024/08/29 18:10:27 Ready to marshal response ...
	2024/08/29 18:10:27 Ready to write response ...
	2024/08/29 18:10:27 Ready to marshal response ...
	2024/08/29 18:10:27 Ready to write response ...
	2024/08/29 18:10:27 Ready to marshal response ...
	2024/08/29 18:10:27 Ready to write response ...
	2024/08/29 18:18:41 Ready to marshal response ...
	2024/08/29 18:18:41 Ready to write response ...
	2024/08/29 18:18:42 Ready to marshal response ...
	2024/08/29 18:18:42 Ready to write response ...
	2024/08/29 18:18:56 Ready to marshal response ...
	2024/08/29 18:18:56 Ready to write response ...
	2024/08/29 18:19:31 Ready to marshal response ...
	2024/08/29 18:19:31 Ready to write response ...
	2024/08/29 18:19:40 Ready to marshal response ...
	2024/08/29 18:19:40 Ready to write response ...
	
	
	==> kernel <==
	 18:19:44 up  1:02,  0 users,  load average: 1.98, 1.06, 0.93
	Linux addons-399511 5.15.0-1068-aws #74~20.04.1-Ubuntu SMP Tue Aug 6 19:45:17 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kube-apiserver [c684a4c45752] <==
	W0829 18:10:19.253104       1 cacher.go:171] Terminating all watchers from cacher numatopologies.nodeinfo.volcano.sh
	W0829 18:10:19.288253       1 cacher.go:171] Terminating all watchers from cacher podgroups.scheduling.volcano.sh
	W0829 18:10:19.315272       1 cacher.go:171] Terminating all watchers from cacher queues.scheduling.volcano.sh
	W0829 18:10:19.821598       1 cacher.go:171] Terminating all watchers from cacher jobflows.flow.volcano.sh
	W0829 18:10:19.937813       1 cacher.go:171] Terminating all watchers from cacher jobtemplates.flow.volcano.sh
	I0829 18:18:50.601048       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	E0829 18:19:06.001397       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"csi-hostpathplugin-sa\" not found]"
	I0829 18:19:12.406912       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0829 18:19:12.406957       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0829 18:19:12.435806       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0829 18:19:12.436037       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0829 18:19:12.445640       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0829 18:19:12.445902       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0829 18:19:12.467996       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0829 18:19:12.468085       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0829 18:19:12.513566       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0829 18:19:12.513627       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0829 18:19:13.446744       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0829 18:19:13.513990       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	W0829 18:19:13.592977       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	I0829 18:19:25.189882       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0829 18:19:26.229293       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0829 18:19:30.881617       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0829 18:19:31.230221       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.102.49.134"}
	I0829 18:19:40.994954       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.101.113.97"}
	
	
	==> kube-controller-manager [299f75e778ae] <==
	E0829 18:19:27.215673       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0829 18:19:27.700232       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0829 18:19:27.700278       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0829 18:19:29.799467       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0829 18:19:29.799506       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0829 18:19:33.420771       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0829 18:19:33.420822       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0829 18:19:34.612628       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0829 18:19:34.612681       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0829 18:19:35.415664       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="gadget"
	W0829 18:19:35.628104       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0829 18:19:35.628163       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0829 18:19:40.723895       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="32.933864ms"
	I0829 18:19:40.737858       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="13.883353ms"
	I0829 18:19:40.738552       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="95.325µs"
	I0829 18:19:40.746835       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="148.724µs"
	I0829 18:19:40.763152       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="63.039µs"
	W0829 18:19:40.881500       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0829 18:19:40.881648       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0829 18:19:42.361229       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-6fb4cdfc84" duration="4.02µs"
	I0829 18:19:43.765275       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-bc57996ff" duration="3.93µs"
	I0829 18:19:43.769192       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-create" delay="0s"
	I0829 18:19:43.779760       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch" delay="0s"
	I0829 18:19:43.930741       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="11.995902ms"
	I0829 18:19:43.930910       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="130.27µs"
	
	
	==> kube-proxy [bd2a027b29eb] <==
	I0829 18:06:57.188898       1 server_linux.go:66] "Using iptables proxy"
	I0829 18:06:57.302637       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0829 18:06:57.302720       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0829 18:06:57.336681       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0829 18:06:57.336733       1 server_linux.go:169] "Using iptables Proxier"
	I0829 18:06:57.346603       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0829 18:06:57.352801       1 server.go:483] "Version info" version="v1.31.0"
	I0829 18:06:57.352826       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0829 18:06:57.365261       1 config.go:197] "Starting service config controller"
	I0829 18:06:57.365306       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0829 18:06:57.365331       1 config.go:104] "Starting endpoint slice config controller"
	I0829 18:06:57.365336       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0829 18:06:57.365347       1 config.go:326] "Starting node config controller"
	I0829 18:06:57.365352       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0829 18:06:57.466285       1 shared_informer.go:320] Caches are synced for node config
	I0829 18:06:57.466334       1 shared_informer.go:320] Caches are synced for service config
	I0829 18:06:57.466357       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [dcc0db9a858e] <==
	E0829 18:06:48.205224       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0829 18:06:48.204725       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0829 18:06:48.205645       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0829 18:06:48.204760       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0829 18:06:48.205943       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0829 18:06:48.203811       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0829 18:06:48.206160       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E0829 18:06:48.206295       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0829 18:06:49.053169       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0829 18:06:49.053447       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0829 18:06:49.064690       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0829 18:06:49.064732       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0829 18:06:49.090683       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0829 18:06:49.090927       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0829 18:06:49.119625       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0829 18:06:49.119720       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0829 18:06:49.124515       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0829 18:06:49.124593       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0829 18:06:49.256598       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0829 18:06:49.256857       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0829 18:06:49.332131       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0829 18:06:49.332185       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0829 18:06:49.585828       1 reflector.go:561] runtime/asm_arm64.s:1222: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0829 18:06:49.586106       1 reflector.go:158] "Unhandled Error" err="runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0829 18:06:51.691255       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 29 18:19:41 addons-399511 kubelet[2337]: I0829 18:19:41.870888    2337 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/0cf9d65c-bc35-4b58-a01d-86ea8577a57a-gcp-creds\") pod \"0cf9d65c-bc35-4b58-a01d-86ea8577a57a\" (UID: \"0cf9d65c-bc35-4b58-a01d-86ea8577a57a\") "
	Aug 29 18:19:41 addons-399511 kubelet[2337]: I0829 18:19:41.870949    2337 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nd99w\" (UniqueName: \"kubernetes.io/projected/0cf9d65c-bc35-4b58-a01d-86ea8577a57a-kube-api-access-nd99w\") pod \"0cf9d65c-bc35-4b58-a01d-86ea8577a57a\" (UID: \"0cf9d65c-bc35-4b58-a01d-86ea8577a57a\") "
	Aug 29 18:19:41 addons-399511 kubelet[2337]: I0829 18:19:41.871368    2337 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0cf9d65c-bc35-4b58-a01d-86ea8577a57a-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "0cf9d65c-bc35-4b58-a01d-86ea8577a57a" (UID: "0cf9d65c-bc35-4b58-a01d-86ea8577a57a"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Aug 29 18:19:41 addons-399511 kubelet[2337]: I0829 18:19:41.873617    2337 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0cf9d65c-bc35-4b58-a01d-86ea8577a57a-kube-api-access-nd99w" (OuterVolumeSpecName: "kube-api-access-nd99w") pod "0cf9d65c-bc35-4b58-a01d-86ea8577a57a" (UID: "0cf9d65c-bc35-4b58-a01d-86ea8577a57a"). InnerVolumeSpecName "kube-api-access-nd99w". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 29 18:19:41 addons-399511 kubelet[2337]: I0829 18:19:41.971810    2337 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-nd99w\" (UniqueName: \"kubernetes.io/projected/0cf9d65c-bc35-4b58-a01d-86ea8577a57a-kube-api-access-nd99w\") on node \"addons-399511\" DevicePath \"\""
	Aug 29 18:19:41 addons-399511 kubelet[2337]: I0829 18:19:41.971845    2337 reconciler_common.go:288] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/0cf9d65c-bc35-4b58-a01d-86ea8577a57a-gcp-creds\") on node \"addons-399511\" DevicePath \"\""
	Aug 29 18:19:42 addons-399511 kubelet[2337]: I0829 18:19:42.681613    2337 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9h625\" (UniqueName: \"kubernetes.io/projected/a3fb70ac-15b5-4566-bc99-dd80b676c940-kube-api-access-9h625\") pod \"a3fb70ac-15b5-4566-bc99-dd80b676c940\" (UID: \"a3fb70ac-15b5-4566-bc99-dd80b676c940\") "
	Aug 29 18:19:42 addons-399511 kubelet[2337]: I0829 18:19:42.686339    2337 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3fb70ac-15b5-4566-bc99-dd80b676c940-kube-api-access-9h625" (OuterVolumeSpecName: "kube-api-access-9h625") pod "a3fb70ac-15b5-4566-bc99-dd80b676c940" (UID: "a3fb70ac-15b5-4566-bc99-dd80b676c940"). InnerVolumeSpecName "kube-api-access-9h625". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 29 18:19:42 addons-399511 kubelet[2337]: I0829 18:19:42.784026    2337 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-9h625\" (UniqueName: \"kubernetes.io/projected/a3fb70ac-15b5-4566-bc99-dd80b676c940-kube-api-access-9h625\") on node \"addons-399511\" DevicePath \"\""
	Aug 29 18:19:42 addons-399511 kubelet[2337]: I0829 18:19:42.853871    2337 scope.go:117] "RemoveContainer" containerID="6a479a02da06e6c12be5ed530290a27c47a973c350d08c8b3dd9798ef075ada2"
	Aug 29 18:19:42 addons-399511 kubelet[2337]: I0829 18:19:42.998914    2337 scope.go:117] "RemoveContainer" containerID="6a479a02da06e6c12be5ed530290a27c47a973c350d08c8b3dd9798ef075ada2"
	Aug 29 18:19:43 addons-399511 kubelet[2337]: E0829 18:19:43.001177    2337 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 6a479a02da06e6c12be5ed530290a27c47a973c350d08c8b3dd9798ef075ada2" containerID="6a479a02da06e6c12be5ed530290a27c47a973c350d08c8b3dd9798ef075ada2"
	Aug 29 18:19:43 addons-399511 kubelet[2337]: I0829 18:19:43.001220    2337 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"6a479a02da06e6c12be5ed530290a27c47a973c350d08c8b3dd9798ef075ada2"} err="failed to get container status \"6a479a02da06e6c12be5ed530290a27c47a973c350d08c8b3dd9798ef075ada2\": rpc error: code = Unknown desc = Error response from daemon: No such container: 6a479a02da06e6c12be5ed530290a27c47a973c350d08c8b3dd9798ef075ada2"
	Aug 29 18:19:43 addons-399511 kubelet[2337]: I0829 18:19:43.224338    2337 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2xnrj\" (UniqueName: \"kubernetes.io/projected/02494a1f-30ad-4cf7-a0d8-5942ff632fdb-kube-api-access-2xnrj\") pod \"02494a1f-30ad-4cf7-a0d8-5942ff632fdb\" (UID: \"02494a1f-30ad-4cf7-a0d8-5942ff632fdb\") "
	Aug 29 18:19:43 addons-399511 kubelet[2337]: I0829 18:19:43.227271    2337 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02494a1f-30ad-4cf7-a0d8-5942ff632fdb-kube-api-access-2xnrj" (OuterVolumeSpecName: "kube-api-access-2xnrj") pod "02494a1f-30ad-4cf7-a0d8-5942ff632fdb" (UID: "02494a1f-30ad-4cf7-a0d8-5942ff632fdb"). InnerVolumeSpecName "kube-api-access-2xnrj". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 29 18:19:43 addons-399511 kubelet[2337]: I0829 18:19:43.325376    2337 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7flmx\" (UniqueName: \"kubernetes.io/projected/c30311f9-7059-4c57-b652-d784a14e1d37-kube-api-access-7flmx\") pod \"c30311f9-7059-4c57-b652-d784a14e1d37\" (UID: \"c30311f9-7059-4c57-b652-d784a14e1d37\") "
	Aug 29 18:19:43 addons-399511 kubelet[2337]: I0829 18:19:43.325479    2337 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-2xnrj\" (UniqueName: \"kubernetes.io/projected/02494a1f-30ad-4cf7-a0d8-5942ff632fdb-kube-api-access-2xnrj\") on node \"addons-399511\" DevicePath \"\""
	Aug 29 18:19:43 addons-399511 kubelet[2337]: I0829 18:19:43.327579    2337 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c30311f9-7059-4c57-b652-d784a14e1d37-kube-api-access-7flmx" (OuterVolumeSpecName: "kube-api-access-7flmx") pod "c30311f9-7059-4c57-b652-d784a14e1d37" (UID: "c30311f9-7059-4c57-b652-d784a14e1d37"). InnerVolumeSpecName "kube-api-access-7flmx". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 29 18:19:43 addons-399511 kubelet[2337]: I0829 18:19:43.426027    2337 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-7flmx\" (UniqueName: \"kubernetes.io/projected/c30311f9-7059-4c57-b652-d784a14e1d37-kube-api-access-7flmx\") on node \"addons-399511\" DevicePath \"\""
	Aug 29 18:19:43 addons-399511 kubelet[2337]: I0829 18:19:43.943638    2337 scope.go:117] "RemoveContainer" containerID="50c88541427fd1c0868a3129e18a2d778bced5c10377ef74c68eb11c3c9f3f26"
	Aug 29 18:19:43 addons-399511 kubelet[2337]: I0829 18:19:43.975648    2337 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-world-app-55bf9c44b4-7dxmc" podStartSLOduration=2.717067239 podStartE2EDuration="3.975627944s" podCreationTimestamp="2024-08-29 18:19:40 +0000 UTC" firstStartedPulling="2024-08-29 18:19:41.838984665 +0000 UTC m=+771.251049690" lastFinishedPulling="2024-08-29 18:19:43.09754537 +0000 UTC m=+772.509610395" observedRunningTime="2024-08-29 18:19:43.925450241 +0000 UTC m=+773.337515265" watchObservedRunningTime="2024-08-29 18:19:43.975627944 +0000 UTC m=+773.387692969"
	Aug 29 18:19:44 addons-399511 kubelet[2337]: I0829 18:19:44.009719    2337 scope.go:117] "RemoveContainer" containerID="50c88541427fd1c0868a3129e18a2d778bced5c10377ef74c68eb11c3c9f3f26"
	Aug 29 18:19:44 addons-399511 kubelet[2337]: E0829 18:19:44.026803    2337 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 50c88541427fd1c0868a3129e18a2d778bced5c10377ef74c68eb11c3c9f3f26" containerID="50c88541427fd1c0868a3129e18a2d778bced5c10377ef74c68eb11c3c9f3f26"
	Aug 29 18:19:44 addons-399511 kubelet[2337]: I0829 18:19:44.027012    2337 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"50c88541427fd1c0868a3129e18a2d778bced5c10377ef74c68eb11c3c9f3f26"} err="failed to get container status \"50c88541427fd1c0868a3129e18a2d778bced5c10377ef74c68eb11c3c9f3f26\": rpc error: code = Unknown desc = Error response from daemon: No such container: 50c88541427fd1c0868a3129e18a2d778bced5c10377ef74c68eb11c3c9f3f26"
	Aug 29 18:19:44 addons-399511 kubelet[2337]: I0829 18:19:44.036595    2337 scope.go:117] "RemoveContainer" containerID="af8bcc3605d016e825b3c540eddbfb7e9ea03a06e3d64633982db1304645da6a"
	
	
	==> storage-provisioner [f3ff3f665f1b] <==
	I0829 18:07:03.782592       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0829 18:07:03.800937       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0829 18:07:03.800989       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0829 18:07:03.810155       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0829 18:07:03.812235       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-399511_327f24b8-372e-4221-9e3b-8677e59e6394!
	I0829 18:07:03.819628       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"95cddafd-f17e-4950-a682-bb4cbc259b79", APIVersion:"v1", ResourceVersion:"564", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-399511_327f24b8-372e-4221-9e3b-8677e59e6394 became leader
	I0829 18:07:03.913016       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-399511_327f24b8-372e-4221-9e3b-8677e59e6394!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-399511 -n addons-399511
helpers_test.go:261: (dbg) Run:  kubectl --context addons-399511 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-399511 describe pod busybox
helpers_test.go:282: (dbg) kubectl --context addons-399511 describe pod busybox:

                                                
                                                
-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-399511/192.168.49.2
	Start Time:       Thu, 29 Aug 2024 18:10:27 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.27
	IPs:
	  IP:  10.244.0.27
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-qqknm (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-qqknm:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  9m18s                   default-scheduler  Successfully assigned default/busybox to addons-399511
	  Normal   Pulling    7m43s (x4 over 9m17s)   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     7m43s (x4 over 9m17s)   kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": Error response from daemon: Head "https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc": unauthorized: authentication failed
	  Warning  Failed     7m43s (x4 over 9m17s)   kubelet            Error: ErrImagePull
	  Warning  Failed     7m30s (x6 over 9m16s)   kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m13s (x20 over 9m16s)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestAddons/parallel/Registry FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Registry (74.53s)

                                                
                                    

Test pass (318/343)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 13.82
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.07
9 TestDownloadOnly/v1.20.0/DeleteAll 0.21
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.31.0/json-events 4.79
13 TestDownloadOnly/v1.31.0/preload-exists 0
17 TestDownloadOnly/v1.31.0/LogsDuration 0.07
18 TestDownloadOnly/v1.31.0/DeleteAll 0.22
19 TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds 0.13
21 TestBinaryMirror 0.94
22 TestOffline 57.33
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.08
27 TestAddons/Setup 223.1
29 TestAddons/serial/Volcano 43.33
31 TestAddons/serial/GCPAuth/Namespaces 0.18
34 TestAddons/parallel/Ingress 20.38
35 TestAddons/parallel/InspektorGadget 11.87
36 TestAddons/parallel/MetricsServer 5.83
39 TestAddons/parallel/CSI 41.75
40 TestAddons/parallel/Headlamp 16.72
41 TestAddons/parallel/CloudSpanner 6.5
42 TestAddons/parallel/LocalPath 53.53
43 TestAddons/parallel/NvidiaDevicePlugin 6.45
44 TestAddons/parallel/Yakd 11.06
45 TestAddons/StoppedEnableDisable 6
46 TestCertOptions 46.72
47 TestCertExpiration 256.5
48 TestDockerFlags 43.43
49 TestForceSystemdFlag 43.7
50 TestForceSystemdEnv 40.49
56 TestErrorSpam/setup 32.35
57 TestErrorSpam/start 0.73
58 TestErrorSpam/status 1.01
59 TestErrorSpam/pause 1.35
60 TestErrorSpam/unpause 1.47
61 TestErrorSpam/stop 10.98
64 TestFunctional/serial/CopySyncFile 0
65 TestFunctional/serial/StartWithProxy 76.86
66 TestFunctional/serial/AuditLog 0
67 TestFunctional/serial/SoftStart 31.33
68 TestFunctional/serial/KubeContext 0.06
69 TestFunctional/serial/KubectlGetPods 0.11
72 TestFunctional/serial/CacheCmd/cache/add_remote 3.93
73 TestFunctional/serial/CacheCmd/cache/add_local 1.41
74 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
75 TestFunctional/serial/CacheCmd/cache/list 0.05
76 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.32
77 TestFunctional/serial/CacheCmd/cache/cache_reload 1.67
78 TestFunctional/serial/CacheCmd/cache/delete 0.11
79 TestFunctional/serial/MinikubeKubectlCmd 0.14
80 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.14
81 TestFunctional/serial/ExtraConfig 44.42
82 TestFunctional/serial/ComponentHealth 0.11
83 TestFunctional/serial/LogsCmd 1.19
84 TestFunctional/serial/LogsFileCmd 1.3
85 TestFunctional/serial/InvalidService 4.88
87 TestFunctional/parallel/ConfigCmd 0.52
88 TestFunctional/parallel/DashboardCmd 14.11
89 TestFunctional/parallel/DryRun 1.06
90 TestFunctional/parallel/InternationalLanguage 0.26
91 TestFunctional/parallel/StatusCmd 1.18
95 TestFunctional/parallel/ServiceCmdConnect 10.74
96 TestFunctional/parallel/AddonsCmd 0.17
97 TestFunctional/parallel/PersistentVolumeClaim 28.2
99 TestFunctional/parallel/SSHCmd 0.72
100 TestFunctional/parallel/CpCmd 2.53
102 TestFunctional/parallel/FileSync 0.3
103 TestFunctional/parallel/CertSync 2.16
107 TestFunctional/parallel/NodeLabels 0.1
109 TestFunctional/parallel/NonActiveRuntimeDisabled 0.28
111 TestFunctional/parallel/License 0.2
113 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.6
114 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
116 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 10.5
117 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.1
118 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
122 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
123 TestFunctional/parallel/ServiceCmd/DeployApp 6.27
124 TestFunctional/parallel/ServiceCmd/List 0.52
125 TestFunctional/parallel/ServiceCmd/JSONOutput 0.54
126 TestFunctional/parallel/ProfileCmd/profile_not_create 0.53
127 TestFunctional/parallel/ServiceCmd/HTTPS 0.5
128 TestFunctional/parallel/ProfileCmd/profile_list 0.55
129 TestFunctional/parallel/ServiceCmd/Format 0.49
130 TestFunctional/parallel/ServiceCmd/URL 0.5
131 TestFunctional/parallel/ProfileCmd/profile_json_output 0.56
132 TestFunctional/parallel/MountCmd/any-port 9.33
133 TestFunctional/parallel/MountCmd/specific-port 2.38
134 TestFunctional/parallel/MountCmd/VerifyCleanup 2.42
135 TestFunctional/parallel/Version/short 0.09
136 TestFunctional/parallel/Version/components 1.19
137 TestFunctional/parallel/ImageCommands/ImageListShort 0.27
138 TestFunctional/parallel/ImageCommands/ImageListTable 0.24
139 TestFunctional/parallel/ImageCommands/ImageListJson 0.26
140 TestFunctional/parallel/ImageCommands/ImageListYaml 0.28
141 TestFunctional/parallel/ImageCommands/ImageBuild 3.35
142 TestFunctional/parallel/ImageCommands/Setup 1.23
143 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.01
144 TestFunctional/parallel/UpdateContextCmd/no_changes 0.2
145 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.23
146 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.19
147 TestFunctional/parallel/DockerEnv/bash 1.44
148 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.03
149 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.27
150 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.46
151 TestFunctional/parallel/ImageCommands/ImageRemove 0.51
152 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.81
153 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.43
154 TestFunctional/delete_echo-server_images 0.05
155 TestFunctional/delete_my-image_image 0.02
156 TestFunctional/delete_minikube_cached_images 0.02
160 TestMultiControlPlane/serial/StartCluster 128.37
161 TestMultiControlPlane/serial/DeployApp 8.25
162 TestMultiControlPlane/serial/PingHostFromPods 1.66
163 TestMultiControlPlane/serial/AddWorkerNode 26.87
164 TestMultiControlPlane/serial/NodeLabels 0.12
165 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.8
166 TestMultiControlPlane/serial/CopyFile 19.91
167 TestMultiControlPlane/serial/StopSecondaryNode 11.63
168 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.56
169 TestMultiControlPlane/serial/RestartSecondaryNode 32.13
170 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 16.25
171 TestMultiControlPlane/serial/RestartClusterKeepsNodes 254.08
172 TestMultiControlPlane/serial/DeleteSecondaryNode 11.89
173 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.51
174 TestMultiControlPlane/serial/StopCluster 32.93
175 TestMultiControlPlane/serial/RestartCluster 158.46
176 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.56
177 TestMultiControlPlane/serial/AddSecondaryNode 47.37
178 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.77
181 TestImageBuild/serial/Setup 34.11
182 TestImageBuild/serial/NormalBuild 2.07
183 TestImageBuild/serial/BuildWithBuildArg 1.01
184 TestImageBuild/serial/BuildWithDockerIgnore 0.89
185 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.78
189 TestJSONOutput/start/Command 74.87
190 TestJSONOutput/start/Audit 0
192 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
193 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
195 TestJSONOutput/pause/Command 0.59
196 TestJSONOutput/pause/Audit 0
198 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
199 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
201 TestJSONOutput/unpause/Command 0.49
202 TestJSONOutput/unpause/Audit 0
204 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
205 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
207 TestJSONOutput/stop/Command 5.72
208 TestJSONOutput/stop/Audit 0
210 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
211 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
212 TestErrorJSONOutput 0.22
214 TestKicCustomNetwork/create_custom_network 35.2
215 TestKicCustomNetwork/use_default_bridge_network 34.36
216 TestKicExistingNetwork 34.84
217 TestKicCustomSubnet 35.63
218 TestKicStaticIP 34.44
219 TestMainNoArgs 0.05
220 TestMinikubeProfile 73.63
223 TestMountStart/serial/StartWithMountFirst 10.3
224 TestMountStart/serial/VerifyMountFirst 0.25
225 TestMountStart/serial/StartWithMountSecond 11.63
226 TestMountStart/serial/VerifyMountSecond 0.26
227 TestMountStart/serial/DeleteFirst 1.47
228 TestMountStart/serial/VerifyMountPostDelete 0.28
229 TestMountStart/serial/Stop 1.21
230 TestMountStart/serial/RestartStopped 8.7
231 TestMountStart/serial/VerifyMountPostStop 0.26
234 TestMultiNode/serial/FreshStart2Nodes 85.32
235 TestMultiNode/serial/DeployApp2Nodes 49.71
236 TestMultiNode/serial/PingHostFrom2Pods 1.03
237 TestMultiNode/serial/AddNode 16.92
238 TestMultiNode/serial/MultiNodeLabels 0.12
239 TestMultiNode/serial/ProfileList 0.37
240 TestMultiNode/serial/CopyFile 10.18
241 TestMultiNode/serial/StopNode 2.24
242 TestMultiNode/serial/StartAfterStop 10.92
243 TestMultiNode/serial/RestartKeepsNodes 104.86
244 TestMultiNode/serial/DeleteNode 5.72
245 TestMultiNode/serial/StopMultiNode 21.55
246 TestMultiNode/serial/RestartMultiNode 54.75
247 TestMultiNode/serial/ValidateNameConflict 35.15
252 TestPreload 142.89
254 TestScheduledStopUnix 105.38
255 TestSkaffold 148.21
257 TestInsufficientStorage 11.3
258 TestRunningBinaryUpgrade 94.41
260 TestKubernetesUpgrade 387.91
261 TestMissingContainerUpgrade 184.9
263 TestPause/serial/Start 56.02
264 TestPause/serial/SecondStartNoReconfiguration 37.71
265 TestPause/serial/Pause 1.2
266 TestPause/serial/VerifyStatus 0.56
267 TestPause/serial/Unpause 1.21
268 TestPause/serial/PauseAgain 1.06
269 TestPause/serial/DeletePaused 2.26
270 TestPause/serial/VerifyDeletedResources 0.12
271 TestStoppedBinaryUpgrade/Setup 0.91
272 TestStoppedBinaryUpgrade/Upgrade 84.79
273 TestStoppedBinaryUpgrade/MinikubeLogs 1.45
282 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
283 TestNoKubernetes/serial/StartWithK8s 45.66
295 TestNoKubernetes/serial/StartWithStopK8s 15.09
296 TestNoKubernetes/serial/Start 11
297 TestNoKubernetes/serial/VerifyK8sNotRunning 0.31
298 TestNoKubernetes/serial/ProfileList 0.69
299 TestNoKubernetes/serial/Stop 1.26
300 TestNoKubernetes/serial/StartNoArgs 8.73
301 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.33
303 TestStartStop/group/old-k8s-version/serial/FirstStart 172.43
304 TestStartStop/group/old-k8s-version/serial/DeployApp 10.66
305 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.21
306 TestStartStop/group/old-k8s-version/serial/Stop 11.16
307 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.34
308 TestStartStop/group/old-k8s-version/serial/SecondStart 149.05
310 TestStartStop/group/no-preload/serial/FirstStart 58.98
311 TestStartStop/group/no-preload/serial/DeployApp 10.36
312 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.16
313 TestStartStop/group/no-preload/serial/Stop 11.07
314 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.18
315 TestStartStop/group/no-preload/serial/SecondStart 267.82
316 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
317 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.1
318 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.22
319 TestStartStop/group/old-k8s-version/serial/Pause 2.87
321 TestStartStop/group/embed-certs/serial/FirstStart 70.9
322 TestStartStop/group/embed-certs/serial/DeployApp 9.36
323 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.09
324 TestStartStop/group/embed-certs/serial/Stop 10.8
325 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.19
326 TestStartStop/group/embed-certs/serial/SecondStart 268.8
327 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
328 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 6.1
329 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.24
330 TestStartStop/group/no-preload/serial/Pause 2.97
332 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 81.27
333 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.56
334 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.27
335 TestStartStop/group/default-k8s-diff-port/serial/Stop 10.86
336 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.19
337 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 270.42
338 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6
339 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.11
340 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.22
341 TestStartStop/group/embed-certs/serial/Pause 2.88
343 TestStartStop/group/newest-cni/serial/FirstStart 38.75
344 TestStartStop/group/newest-cni/serial/DeployApp 0
345 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.23
346 TestStartStop/group/newest-cni/serial/Stop 5.8
347 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.18
348 TestStartStop/group/newest-cni/serial/SecondStart 19.2
349 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
350 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
351 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.28
352 TestStartStop/group/newest-cni/serial/Pause 3.37
353 TestNetworkPlugins/group/auto/Start 77.19
354 TestNetworkPlugins/group/auto/KubeletFlags 0.31
355 TestNetworkPlugins/group/auto/NetCatPod 10.3
356 TestNetworkPlugins/group/auto/DNS 0.21
357 TestNetworkPlugins/group/auto/Localhost 0.18
358 TestNetworkPlugins/group/auto/HairPin 0.17
359 TestNetworkPlugins/group/kindnet/Start 68.73
360 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
361 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 6.14
362 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.3
363 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.18
364 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
365 TestNetworkPlugins/group/calico/Start 75.5
366 TestNetworkPlugins/group/kindnet/KubeletFlags 0.39
367 TestNetworkPlugins/group/kindnet/NetCatPod 12.4
368 TestNetworkPlugins/group/kindnet/DNS 0.27
369 TestNetworkPlugins/group/kindnet/Localhost 0.2
370 TestNetworkPlugins/group/kindnet/HairPin 0.24
371 TestNetworkPlugins/group/custom-flannel/Start 64.64
372 TestNetworkPlugins/group/calico/ControllerPod 6.01
373 TestNetworkPlugins/group/calico/KubeletFlags 0.37
374 TestNetworkPlugins/group/calico/NetCatPod 12.35
375 TestNetworkPlugins/group/calico/DNS 0.31
376 TestNetworkPlugins/group/calico/Localhost 0.27
377 TestNetworkPlugins/group/calico/HairPin 0.26
378 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.39
379 TestNetworkPlugins/group/custom-flannel/NetCatPod 13.41
380 TestNetworkPlugins/group/false/Start 60.11
381 TestNetworkPlugins/group/custom-flannel/DNS 0.38
382 TestNetworkPlugins/group/custom-flannel/Localhost 0.21
383 TestNetworkPlugins/group/custom-flannel/HairPin 0.25
384 TestNetworkPlugins/group/enable-default-cni/Start 52.68
385 TestNetworkPlugins/group/false/KubeletFlags 0.36
386 TestNetworkPlugins/group/false/NetCatPod 12.41
387 TestNetworkPlugins/group/false/DNS 0.35
388 TestNetworkPlugins/group/false/Localhost 0.24
389 TestNetworkPlugins/group/false/HairPin 0.17
390 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.43
391 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.43
392 TestNetworkPlugins/group/enable-default-cni/DNS 0.26
393 TestNetworkPlugins/group/enable-default-cni/Localhost 0.25
394 TestNetworkPlugins/group/enable-default-cni/HairPin 0.21
395 TestNetworkPlugins/group/flannel/Start 62.02
396 TestNetworkPlugins/group/bridge/Start 88.06
397 TestNetworkPlugins/group/flannel/ControllerPod 6.01
398 TestNetworkPlugins/group/flannel/KubeletFlags 0.41
399 TestNetworkPlugins/group/flannel/NetCatPod 11.27
400 TestNetworkPlugins/group/flannel/DNS 0.18
401 TestNetworkPlugins/group/flannel/Localhost 0.17
402 TestNetworkPlugins/group/flannel/HairPin 0.17
403 TestNetworkPlugins/group/kubenet/Start 82.22
404 TestNetworkPlugins/group/bridge/KubeletFlags 0.33
405 TestNetworkPlugins/group/bridge/NetCatPod 11.37
406 TestNetworkPlugins/group/bridge/DNS 0.26
407 TestNetworkPlugins/group/bridge/Localhost 0.22
408 TestNetworkPlugins/group/bridge/HairPin 0.21
409 TestNetworkPlugins/group/kubenet/KubeletFlags 0.27
410 TestNetworkPlugins/group/kubenet/NetCatPod 11.26
411 TestNetworkPlugins/group/kubenet/DNS 0.17
412 TestNetworkPlugins/group/kubenet/Localhost 0.15
413 TestNetworkPlugins/group/kubenet/HairPin 0.16
TestDownloadOnly/v1.20.0/json-events (13.82s)
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-111357 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-111357 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=docker  --container-runtime=docker: (13.82415844s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (13.82s)
TestDownloadOnly/v1.20.0/preload-exists (0s)
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)
TestDownloadOnly/v1.20.0/LogsDuration (0.07s)
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-111357
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-111357: exit status 85 (74.169261ms)
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-111357 | jenkins | v1.33.1 | 29 Aug 24 18:05 UTC |          |
	|         | -p download-only-111357        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/29 18:05:39
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.22.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0829 18:05:39.803653    7592 out.go:345] Setting OutFile to fd 1 ...
	I0829 18:05:39.803782    7592 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 18:05:39.803792    7592 out.go:358] Setting ErrFile to fd 2...
	I0829 18:05:39.803797    7592 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 18:05:39.804060    7592 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19531-2266/.minikube/bin
	W0829 18:05:39.804213    7592 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19531-2266/.minikube/config/config.json: open /home/jenkins/minikube-integration/19531-2266/.minikube/config/config.json: no such file or directory
	I0829 18:05:39.804643    7592 out.go:352] Setting JSON to true
	I0829 18:05:39.805437    7592 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":2881,"bootTime":1724951859,"procs":148,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0829 18:05:39.805509    7592 start.go:139] virtualization:  
	I0829 18:05:39.808459    7592 out.go:97] [download-only-111357] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	W0829 18:05:39.808617    7592 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19531-2266/.minikube/cache/preloaded-tarball: no such file or directory
	I0829 18:05:39.808672    7592 notify.go:220] Checking for updates...
	I0829 18:05:39.810285    7592 out.go:169] MINIKUBE_LOCATION=19531
	I0829 18:05:39.812341    7592 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0829 18:05:39.814014    7592 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19531-2266/kubeconfig
	I0829 18:05:39.816057    7592 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19531-2266/.minikube
	I0829 18:05:39.817582    7592 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0829 18:05:39.821239    7592 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0829 18:05:39.821478    7592 driver.go:392] Setting default libvirt URI to qemu:///system
	I0829 18:05:39.849989    7592 docker.go:123] docker version: linux-27.2.0:Docker Engine - Community
	I0829 18:05:39.850099    7592 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0829 18:05:40.204698    7592 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-08-29 18:05:40.194865571 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0829 18:05:40.204821    7592 docker.go:307] overlay module found
	I0829 18:05:40.206781    7592 out.go:97] Using the docker driver based on user configuration
	I0829 18:05:40.206818    7592 start.go:297] selected driver: docker
	I0829 18:05:40.206825    7592 start.go:901] validating driver "docker" against <nil>
	I0829 18:05:40.206935    7592 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0829 18:05:40.264868    7592 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-08-29 18:05:40.254594989 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0829 18:05:40.265052    7592 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0829 18:05:40.265431    7592 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0829 18:05:40.265604    7592 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0829 18:05:40.267691    7592 out.go:169] Using Docker driver with root privileges
	I0829 18:05:40.269750    7592 cni.go:84] Creating CNI manager for ""
	I0829 18:05:40.269786    7592 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0829 18:05:40.269871    7592 start.go:340] cluster config:
	{Name:download-only-111357 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-111357 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 18:05:40.271864    7592 out.go:97] Starting "download-only-111357" primary control-plane node in "download-only-111357" cluster
	I0829 18:05:40.271905    7592 cache.go:121] Beginning downloading kic base image for docker with docker
	I0829 18:05:40.273582    7592 out.go:97] Pulling base image v0.0.44-1724775115-19521 ...
	I0829 18:05:40.273629    7592 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0829 18:05:40.273676    7592 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce in local docker daemon
	I0829 18:05:40.288853    7592 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce to local cache
	I0829 18:05:40.289029    7592 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce in local cache directory
	I0829 18:05:40.289137    7592 image.go:148] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce to local cache
	I0829 18:05:40.334493    7592 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0829 18:05:40.334518    7592 cache.go:56] Caching tarball of preloaded images
	I0829 18:05:40.334671    7592 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0829 18:05:40.337158    7592 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0829 18:05:40.337188    7592 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0829 18:05:40.434935    7592 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /home/jenkins/minikube-integration/19531-2266/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0829 18:05:44.271936    7592 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0829 18:05:44.272053    7592 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19531-2266/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0829 18:05:45.391705    7592 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0829 18:05:45.392219    7592 profile.go:143] Saving config to /home/jenkins/minikube-integration/19531-2266/.minikube/profiles/download-only-111357/config.json ...
	I0829 18:05:45.392280    7592 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-2266/.minikube/profiles/download-only-111357/config.json: {Name:mkfc1033e324b92efb757454bb7138361d48fa21 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:05:45.392509    7592 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0829 18:05:45.392737    7592 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/19531-2266/.minikube/cache/linux/arm64/v1.20.0/kubectl
	
	
	* The control-plane node download-only-111357 host does not exist
	  To start a cluster, run: "minikube start -p download-only-111357"
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.07s)
TestDownloadOnly/v1.20.0/DeleteAll (0.21s)
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.21s)
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-111357
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)
TestDownloadOnly/v1.31.0/json-events (4.79s)
=== RUN   TestDownloadOnly/v1.31.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-563162 --force --alsologtostderr --kubernetes-version=v1.31.0 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-563162 --force --alsologtostderr --kubernetes-version=v1.31.0 --container-runtime=docker --driver=docker  --container-runtime=docker: (4.790825958s)
--- PASS: TestDownloadOnly/v1.31.0/json-events (4.79s)
TestDownloadOnly/v1.31.0/preload-exists (0s)
=== RUN   TestDownloadOnly/v1.31.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0/preload-exists (0.00s)
TestDownloadOnly/v1.31.0/LogsDuration (0.07s)
=== RUN   TestDownloadOnly/v1.31.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-563162
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-563162: exit status 85 (69.586269ms)
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-111357 | jenkins | v1.33.1 | 29 Aug 24 18:05 UTC |                     |
	|         | -p download-only-111357        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 29 Aug 24 18:05 UTC | 29 Aug 24 18:05 UTC |
	| delete  | -p download-only-111357        | download-only-111357 | jenkins | v1.33.1 | 29 Aug 24 18:05 UTC | 29 Aug 24 18:05 UTC |
	| start   | -o=json --download-only        | download-only-563162 | jenkins | v1.33.1 | 29 Aug 24 18:05 UTC |                     |
	|         | -p download-only-563162        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/29 18:05:54
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.22.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0829 18:05:54.069439    7796 out.go:345] Setting OutFile to fd 1 ...
	I0829 18:05:54.069647    7796 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 18:05:54.069663    7796 out.go:358] Setting ErrFile to fd 2...
	I0829 18:05:54.069670    7796 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 18:05:54.069924    7796 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19531-2266/.minikube/bin
	I0829 18:05:54.070355    7796 out.go:352] Setting JSON to true
	I0829 18:05:54.071117    7796 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":2895,"bootTime":1724951859,"procs":146,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0829 18:05:54.071203    7796 start.go:139] virtualization:  
	I0829 18:05:54.073930    7796 out.go:97] [download-only-563162] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0829 18:05:54.074217    7796 notify.go:220] Checking for updates...
	I0829 18:05:54.076961    7796 out.go:169] MINIKUBE_LOCATION=19531
	I0829 18:05:54.079100    7796 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0829 18:05:54.080889    7796 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19531-2266/kubeconfig
	I0829 18:05:54.083024    7796 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19531-2266/.minikube
	I0829 18:05:54.085182    7796 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0829 18:05:54.089484    7796 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0829 18:05:54.089830    7796 driver.go:392] Setting default libvirt URI to qemu:///system
	I0829 18:05:54.111699    7796 docker.go:123] docker version: linux-27.2.0:Docker Engine - Community
	I0829 18:05:54.111802    7796 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0829 18:05:54.176760    7796 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-08-29 18:05:54.166787465 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0829 18:05:54.176869    7796 docker.go:307] overlay module found
	I0829 18:05:54.179175    7796 out.go:97] Using the docker driver based on user configuration
	I0829 18:05:54.179201    7796 start.go:297] selected driver: docker
	I0829 18:05:54.179208    7796 start.go:901] validating driver "docker" against <nil>
	I0829 18:05:54.179314    7796 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0829 18:05:54.234697    7796 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-08-29 18:05:54.225436318 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0829 18:05:54.234883    7796 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0829 18:05:54.235168    7796 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0829 18:05:54.235327    7796 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0829 18:05:54.237533    7796 out.go:169] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-563162 host does not exist
	  To start a cluster, run: "minikube start -p download-only-563162"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0/LogsDuration (0.07s)

TestDownloadOnly/v1.31.0/DeleteAll (0.22s)

=== RUN   TestDownloadOnly/v1.31.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.0/DeleteAll (0.22s)

TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-563162
--- PASS: TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds (0.13s)

TestBinaryMirror (0.94s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-832860 --alsologtostderr --binary-mirror http://127.0.0.1:42491 --driver=docker  --container-runtime=docker
helpers_test.go:175: Cleaning up "binary-mirror-832860" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-832860
--- PASS: TestBinaryMirror (0.94s)

TestOffline (57.33s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-arm64 start -p offline-docker-753980 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=docker
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-arm64 start -p offline-docker-753980 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=docker: (54.969094301s)
helpers_test.go:175: Cleaning up "offline-docker-753980" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p offline-docker-753980
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p offline-docker-753980: (2.36439958s)
--- PASS: TestOffline (57.33s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-399511
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-399511: exit status 85 (65.756411ms)

-- stdout --
	* Profile "addons-399511" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-399511"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-399511
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-399511: exit status 85 (83.205628ms)

-- stdout --
	* Profile "addons-399511" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-399511"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

TestAddons/Setup (223.1s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-arm64 start -p addons-399511 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns
addons_test.go:110: (dbg) Done: out/minikube-linux-arm64 start -p addons-399511 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns: (3m43.102130592s)
--- PASS: TestAddons/Setup (223.10s)

TestAddons/serial/Volcano (43.33s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:913: volcano-controller stabilized in 66.245292ms
addons_test.go:905: volcano-admission stabilized in 68.266638ms
addons_test.go:897: volcano-scheduler stabilized in 69.035519ms
addons_test.go:919: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-576bc46687-6dxmd" [3c1c4341-c629-44de-aae6-c6d93eae2b66] Running
addons_test.go:919: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.005080007s
addons_test.go:923: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-77d7d48b68-74474" [215acb7d-41cb-4272-9c36-28398cb449e6] Running
addons_test.go:923: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 6.005470338s
addons_test.go:927: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-56675bb4d5-lbpbm" [b1122d27-b7f3-459a-8100-34605aa44877] Running
addons_test.go:927: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.004947752s
addons_test.go:932: (dbg) Run:  kubectl --context addons-399511 delete -n volcano-system job volcano-admission-init
addons_test.go:938: (dbg) Run:  kubectl --context addons-399511 create -f testdata/vcjob.yaml
addons_test.go:946: (dbg) Run:  kubectl --context addons-399511 get vcjob -n my-volcano
addons_test.go:964: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [556a6747-4396-4e2b-8992-34e10bd9db00] Pending
helpers_test.go:344: "test-job-nginx-0" [556a6747-4396-4e2b-8992-34e10bd9db00] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [556a6747-4396-4e2b-8992-34e10bd9db00] Running
addons_test.go:964: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 15.004275894s
addons_test.go:968: (dbg) Run:  out/minikube-linux-arm64 -p addons-399511 addons disable volcano --alsologtostderr -v=1
addons_test.go:968: (dbg) Done: out/minikube-linux-arm64 -p addons-399511 addons disable volcano --alsologtostderr -v=1: (10.603839092s)
--- PASS: TestAddons/serial/Volcano (43.33s)

TestAddons/serial/GCPAuth/Namespaces (0.18s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-399511 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-399511 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.18s)

TestAddons/parallel/Ingress (20.38s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-399511 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-399511 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-399511 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [e1f8cd3c-dd3d-498a-ad1f-401c8c9baea5] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [e1f8cd3c-dd3d-498a-ad1f-401c8c9baea5] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.003950281s
addons_test.go:264: (dbg) Run:  out/minikube-linux-arm64 -p addons-399511 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-399511 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-arm64 -p addons-399511 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:308: (dbg) Run:  out/minikube-linux-arm64 -p addons-399511 addons disable ingress-dns --alsologtostderr -v=1
2024/08/29 18:19:41 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:308: (dbg) Done: out/minikube-linux-arm64 -p addons-399511 addons disable ingress-dns --alsologtostderr -v=1: (1.497222898s)
addons_test.go:313: (dbg) Run:  out/minikube-linux-arm64 -p addons-399511 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-linux-arm64 -p addons-399511 addons disable ingress --alsologtostderr -v=1: (7.947494809s)
--- PASS: TestAddons/parallel/Ingress (20.38s)

TestAddons/parallel/InspektorGadget (11.87s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-zqjqz" [120d96a4-2ccd-4a69-b003-afeb8051a784] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.004796235s
addons_test.go:851: (dbg) Run:  out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-399511
addons_test.go:851: (dbg) Done: out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-399511: (5.868290121s)
--- PASS: TestAddons/parallel/InspektorGadget (11.87s)

TestAddons/parallel/MetricsServer (5.83s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 3.404801ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-8988944d9-bxwfw" [cfb12d61-e779-43c1-b037-4307e16e276b] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.004854397s
addons_test.go:417: (dbg) Run:  kubectl --context addons-399511 top pods -n kube-system
addons_test.go:434: (dbg) Run:  out/minikube-linux-arm64 -p addons-399511 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.83s)

TestAddons/parallel/CSI (41.75s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 9.518059ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-399511 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-399511 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-399511 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-399511 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-399511 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-399511 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-399511 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-399511 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-399511 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-399511 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-399511 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-399511 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-399511 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context addons-399511 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [81c0444d-10a3-4735-877c-97ecf9b18ca1] Pending
helpers_test.go:344: "task-pv-pod" [81c0444d-10a3-4735-877c-97ecf9b18ca1] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [81c0444d-10a3-4735-877c-97ecf9b18ca1] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 8.004236597s
addons_test.go:590: (dbg) Run:  kubectl --context addons-399511 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-399511 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-399511 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context addons-399511 delete pod task-pv-pod
addons_test.go:606: (dbg) Run:  kubectl --context addons-399511 delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context addons-399511 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-399511 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-399511 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-399511 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-399511 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context addons-399511 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [6743a9a9-4339-41a4-8f45-6b8c9330febf] Pending
helpers_test.go:344: "task-pv-pod-restore" [6743a9a9-4339-41a4-8f45-6b8c9330febf] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [6743a9a9-4339-41a4-8f45-6b8c9330febf] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.003838394s
addons_test.go:632: (dbg) Run:  kubectl --context addons-399511 delete pod task-pv-pod-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-399511 delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context addons-399511 delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-linux-arm64 -p addons-399511 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-linux-arm64 -p addons-399511 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.752100301s)
addons_test.go:648: (dbg) Run:  out/minikube-linux-arm64 -p addons-399511 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (41.75s)

TestAddons/parallel/Headlamp (16.72s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-399511 --alsologtostderr -v=1
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-57fb76fcdb-kl4dn" [f372ce09-c400-4bc5-ae25-7971fcc6956c] Pending
helpers_test.go:344: "headlamp-57fb76fcdb-kl4dn" [f372ce09-c400-4bc5-ae25-7971fcc6956c] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-57fb76fcdb-kl4dn" [f372ce09-c400-4bc5-ae25-7971fcc6956c] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 10.003829409s
addons_test.go:839: (dbg) Run:  out/minikube-linux-arm64 -p addons-399511 addons disable headlamp --alsologtostderr -v=1
addons_test.go:839: (dbg) Done: out/minikube-linux-arm64 -p addons-399511 addons disable headlamp --alsologtostderr -v=1: (5.716276674s)
--- PASS: TestAddons/parallel/Headlamp (16.72s)

TestAddons/parallel/CloudSpanner (6.5s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-769b77f747-6ck78" [d18f99ee-c9c1-43f1-a959-6c6280e3afbe] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.003669972s
addons_test.go:870: (dbg) Run:  out/minikube-linux-arm64 addons disable cloud-spanner -p addons-399511
--- PASS: TestAddons/parallel/CloudSpanner (6.50s)

TestAddons/parallel/LocalPath (53.53s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-399511 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:988: (dbg) Run:  kubectl --context addons-399511 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-399511 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-399511 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-399511 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-399511 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-399511 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-399511 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [080efb5c-81ee-4165-8b29-7b192ad32eb7] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [080efb5c-81ee-4165-8b29-7b192ad32eb7] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [080efb5c-81ee-4165-8b29-7b192ad32eb7] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.005595613s
addons_test.go:1000: (dbg) Run:  kubectl --context addons-399511 get pvc test-pvc -o=json
addons_test.go:1009: (dbg) Run:  out/minikube-linux-arm64 -p addons-399511 ssh "cat /opt/local-path-provisioner/pvc-52ac731d-38d4-4d01-82d4-9547704a677c_default_test-pvc/file1"
addons_test.go:1021: (dbg) Run:  kubectl --context addons-399511 delete pod test-local-path
addons_test.go:1025: (dbg) Run:  kubectl --context addons-399511 delete pvc test-pvc
addons_test.go:1029: (dbg) Run:  out/minikube-linux-arm64 -p addons-399511 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1029: (dbg) Done: out/minikube-linux-arm64 -p addons-399511 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.431282755s)
--- PASS: TestAddons/parallel/LocalPath (53.53s)

TestAddons/parallel/NvidiaDevicePlugin (6.45s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-dqrjn" [5522a38d-8785-471b-8ec0-3e5d151909ad] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.003519032s
addons_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 addons disable nvidia-device-plugin -p addons-399511
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.45s)

TestAddons/parallel/Yakd (11.06s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-jm8hb" [e5c21211-58f2-4a13-b60d-74c8fe455147] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.009098163s
addons_test.go:1076: (dbg) Run:  out/minikube-linux-arm64 -p addons-399511 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-linux-arm64 -p addons-399511 addons disable yakd --alsologtostderr -v=1: (6.046975994s)
--- PASS: TestAddons/parallel/Yakd (11.06s)

TestAddons/StoppedEnableDisable (6s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-399511
addons_test.go:174: (dbg) Done: out/minikube-linux-arm64 stop -p addons-399511: (5.668922852s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-399511
addons_test.go:182: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-399511
addons_test.go:187: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-399511
--- PASS: TestAddons/StoppedEnableDisable (6.00s)

TestCertOptions (46.72s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-547908 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-547908 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker: (43.649559845s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-547908 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-547908 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-547908 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-547908" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-547908
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-547908: (2.31290459s)
--- PASS: TestCertOptions (46.72s)

TestCertExpiration (256.5s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-285129 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=docker
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-285129 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=docker: (43.986017083s)
E0829 19:09:27.583896    7586 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-2266/.minikube/profiles/functional-491299/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:09:44.161838    7586 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-2266/.minikube/profiles/addons-399511/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-285129 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=docker
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-285129 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=docker: (30.107546462s)
helpers_test.go:175: Cleaning up "cert-expiration-285129" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-285129
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-285129: (2.409658047s)
--- PASS: TestCertExpiration (256.50s)

TestDockerFlags (43.43s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags
=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-linux-arm64 start -p docker-flags-874463 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
E0829 19:05:47.204942    7586 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-2266/.minikube/profiles/skaffold-409947/client.crt: no such file or directory" logger="UnhandledError"
docker_test.go:51: (dbg) Done: out/minikube-linux-arm64 start -p docker-flags-874463 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (40.585127925s)
docker_test.go:56: (dbg) Run:  out/minikube-linux-arm64 -p docker-flags-874463 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-linux-arm64 -p docker-flags-874463 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-874463" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-flags-874463
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-flags-874463: (2.245747128s)
--- PASS: TestDockerFlags (43.43s)

TestForceSystemdFlag (43.7s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-698011 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-698011 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (40.851384431s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-698011 ssh "docker info --format {{.CgroupDriver}}"
E0829 19:08:03.343269    7586 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-2266/.minikube/profiles/skaffold-409947/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:175: Cleaning up "force-systemd-flag-698011" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-698011
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-698011: (2.307559124s)
--- PASS: TestForceSystemdFlag (43.70s)

TestForceSystemdEnv (40.49s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-550231 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-550231 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (37.088667408s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-550231 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-550231" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-550231
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-550231: (2.915397566s)
--- PASS: TestForceSystemdEnv (40.49s)

TestErrorSpam/setup (32.35s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-295128 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-295128 --driver=docker  --container-runtime=docker
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-295128 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-295128 --driver=docker  --container-runtime=docker: (32.352856666s)
--- PASS: TestErrorSpam/setup (32.35s)

TestErrorSpam/start (0.73s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-295128 --log_dir /tmp/nospam-295128 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-295128 --log_dir /tmp/nospam-295128 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-295128 --log_dir /tmp/nospam-295128 start --dry-run
--- PASS: TestErrorSpam/start (0.73s)

TestErrorSpam/status (1.01s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-295128 --log_dir /tmp/nospam-295128 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-295128 --log_dir /tmp/nospam-295128 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-295128 --log_dir /tmp/nospam-295128 status
--- PASS: TestErrorSpam/status (1.01s)

TestErrorSpam/pause (1.35s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-295128 --log_dir /tmp/nospam-295128 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-295128 --log_dir /tmp/nospam-295128 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-295128 --log_dir /tmp/nospam-295128 pause
--- PASS: TestErrorSpam/pause (1.35s)

TestErrorSpam/unpause (1.47s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-295128 --log_dir /tmp/nospam-295128 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-295128 --log_dir /tmp/nospam-295128 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-295128 --log_dir /tmp/nospam-295128 unpause
--- PASS: TestErrorSpam/unpause (1.47s)

TestErrorSpam/stop (10.98s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-295128 --log_dir /tmp/nospam-295128 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-295128 --log_dir /tmp/nospam-295128 stop: (10.797270282s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-295128 --log_dir /tmp/nospam-295128 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-295128 --log_dir /tmp/nospam-295128 stop
--- PASS: TestErrorSpam/stop (10.98s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19531-2266/.minikube/files/etc/test/nested/copy/7586/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (76.86s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-arm64 start -p functional-491299 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker
functional_test.go:2234: (dbg) Done: out/minikube-linux-arm64 start -p functional-491299 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker: (1m16.857270616s)
--- PASS: TestFunctional/serial/StartWithProxy (76.86s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (31.33s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:659: (dbg) Run:  out/minikube-linux-arm64 start -p functional-491299 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-arm64 start -p functional-491299 --alsologtostderr -v=8: (31.329833552s)
functional_test.go:663: soft start took 31.334132525s for "functional-491299" cluster.
--- PASS: TestFunctional/serial/SoftStart (31.33s)

TestFunctional/serial/KubeContext (0.06s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

TestFunctional/serial/KubectlGetPods (0.11s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-491299 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.11s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.93s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-491299 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-491299 cache add registry.k8s.io/pause:3.1: (1.332315732s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-491299 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-491299 cache add registry.k8s.io/pause:3.3: (1.421337827s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-491299 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-491299 cache add registry.k8s.io/pause:latest: (1.178910952s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.93s)

TestFunctional/serial/CacheCmd/cache/add_local (1.41s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-491299 /tmp/TestFunctionalserialCacheCmdcacheadd_local2743867308/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-arm64 -p functional-491299 cache add minikube-local-cache-test:functional-491299
functional_test.go:1094: (dbg) Run:  out/minikube-linux-arm64 -p functional-491299 cache delete minikube-local-cache-test:functional-491299
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-491299
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.41s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.32s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-arm64 -p functional-491299 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.32s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.67s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-arm64 -p functional-491299 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-arm64 -p functional-491299 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-491299 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (287.936303ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-arm64 -p functional-491299 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-linux-arm64 -p functional-491299 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.67s)

TestFunctional/serial/CacheCmd/cache/delete (0.11s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.11s)

TestFunctional/serial/MinikubeKubectlCmd (0.14s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-arm64 -p functional-491299 kubectl -- --context functional-491299 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.14s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-491299 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

TestFunctional/serial/ExtraConfig (44.42s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-arm64 start -p functional-491299 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-arm64 start -p functional-491299 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (44.421660334s)
functional_test.go:761: restart took 44.421768854s for "functional-491299" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (44.42s)

TestFunctional/serial/ComponentHealth (0.11s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-491299 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.11s)

TestFunctional/serial/LogsCmd (1.19s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-arm64 -p functional-491299 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-arm64 -p functional-491299 logs: (1.192858005s)
--- PASS: TestFunctional/serial/LogsCmd (1.19s)

TestFunctional/serial/LogsFileCmd (1.30s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-arm64 -p functional-491299 logs --file /tmp/TestFunctionalserialLogsFileCmd456404980/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-arm64 -p functional-491299 logs --file /tmp/TestFunctionalserialLogsFileCmd456404980/001/logs.txt: (1.295457774s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.30s)

TestFunctional/serial/InvalidService (4.88s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-491299 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-491299
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-491299: exit status 115 (578.11981ms)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:30755 |
	|-----------|-------------|-------------|---------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-491299 delete -f testdata/invalidsvc.yaml
functional_test.go:2327: (dbg) Done: kubectl --context functional-491299 delete -f testdata/invalidsvc.yaml: (1.038466908s)
--- PASS: TestFunctional/serial/InvalidService (4.88s)

TestFunctional/parallel/ConfigCmd (0.52s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-491299 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-491299 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-491299 config get cpus: exit status 14 (81.90941ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-491299 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-491299 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-491299 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-491299 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-491299 config get cpus: exit status 14 (84.54208ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.52s)
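The set/get/unset round-trip above asserts a simple contract: `config get` on an absent key exits with status 14. A minimal stand-alone sketch of that contract, using a stub `config_get` in place of the real `minikube config get` so it runs without a cluster (the stub and the `CONFIG_CPUS` variable are illustrative, not minikube internals):

```shell
# Stub mimicking the exit-code contract asserted above: a present key is
# printed with status 0, an absent key yields status 14.
config_get() {
  if [ -n "${CONFIG_CPUS:-}" ]; then
    echo "$CONFIG_CPUS"
  else
    echo "Error: specified key could not be found in config" >&2
    return 14
  fi
}

config_get || echo "exit status $?"   # key unset: reports exit status 14
CONFIG_CPUS=2                         # stands in for `config set cpus 2`
config_get                            # prints 2
```

The test exercises exactly this sequence twice (unset/get, then set/get/unset/get), checking the 14 both times.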

TestFunctional/parallel/DashboardCmd (14.11s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-491299 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-491299 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 48834: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (14.11s)

TestFunctional/parallel/DryRun (1.06s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-arm64 start -p functional-491299 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-491299 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (256.81556ms)

-- stdout --
	* [functional-491299] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19531
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19531-2266/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19531-2266/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0829 18:24:59.617668   48384 out.go:345] Setting OutFile to fd 1 ...
	I0829 18:24:59.617877   48384 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 18:24:59.617889   48384 out.go:358] Setting ErrFile to fd 2...
	I0829 18:24:59.617894   48384 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 18:24:59.618192   48384 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19531-2266/.minikube/bin
	I0829 18:24:59.618691   48384 out.go:352] Setting JSON to false
	I0829 18:24:59.619987   48384 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":4041,"bootTime":1724951859,"procs":223,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0829 18:24:59.620067   48384 start.go:139] virtualization:  
	I0829 18:24:59.622572   48384 out.go:177] * [functional-491299] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0829 18:24:59.624737   48384 out.go:177]   - MINIKUBE_LOCATION=19531
	I0829 18:24:59.624792   48384 notify.go:220] Checking for updates...
	I0829 18:24:59.629267   48384 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0829 18:24:59.632102   48384 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19531-2266/kubeconfig
	I0829 18:24:59.633750   48384 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19531-2266/.minikube
	I0829 18:24:59.635431   48384 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0829 18:24:59.637115   48384 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0829 18:24:59.639381   48384 config.go:182] Loaded profile config "functional-491299": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0829 18:24:59.639934   48384 driver.go:392] Setting default libvirt URI to qemu:///system
	I0829 18:24:59.684486   48384 docker.go:123] docker version: linux-27.2.0:Docker Engine - Community
	I0829 18:24:59.684618   48384 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0829 18:24:59.785450   48384 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-08-29 18:24:59.774366855 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0829 18:24:59.785559   48384 docker.go:307] overlay module found
	I0829 18:24:59.789603   48384 out.go:177] * Using the docker driver based on existing profile
	I0829 18:24:59.791173   48384 start.go:297] selected driver: docker
	I0829 18:24:59.791191   48384 start.go:901] validating driver "docker" against &{Name:functional-491299 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-491299 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 18:24:59.791301   48384 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0829 18:24:59.793468   48384 out.go:201] 
	W0829 18:24:59.794984   48384 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0829 18:24:59.796523   48384 out.go:201] 

** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-arm64 start -p functional-491299 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
--- PASS: TestFunctional/parallel/DryRun (1.06s)
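The `RSRC_INSUFFICIENT_REQ_MEMORY` path exercised above is a plain floor check on the requested memory. A stand-alone sketch of that check (the 1800MB minimum and the 23 exit status are read off this log, not lifted from minikube's source; `check_req_memory` is a hypothetical helper):

```shell
# Returns 0 when the requested memory meets the floor, 23 otherwise,
# mirroring the failure mode shown in the dry-run output above.
check_req_memory() {
  req_mb=$1
  min_mb=1800
  if [ "$req_mb" -lt "$min_mb" ]; then
    echo "X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: requested ${req_mb}MiB is below the ${min_mb}MB minimum" >&2
    return 23
  fi
}

check_req_memory 250 || echo "would exit with status $?"   # reports 23
check_req_memory 4000 && echo "memory request ok"          # passes
```

Because `--dry-run` performs only this validation, the test can assert exit status 23 without ever creating a node.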

TestFunctional/parallel/InternationalLanguage (0.26s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-arm64 start -p functional-491299 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-491299 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (260.270995ms)

-- stdout --
	* [functional-491299] minikube v1.33.1 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19531
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19531-2266/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19531-2266/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0829 18:24:59.379219   48286 out.go:345] Setting OutFile to fd 1 ...
	I0829 18:24:59.379421   48286 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 18:24:59.379438   48286 out.go:358] Setting ErrFile to fd 2...
	I0829 18:24:59.379443   48286 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 18:24:59.379915   48286 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19531-2266/.minikube/bin
	I0829 18:24:59.380410   48286 out.go:352] Setting JSON to false
	I0829 18:24:59.381598   48286 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":4040,"bootTime":1724951859,"procs":225,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0829 18:24:59.381732   48286 start.go:139] virtualization:  
	I0829 18:24:59.387602   48286 out.go:177] * [functional-491299] minikube v1.33.1 sur Ubuntu 20.04 (arm64)
	I0829 18:24:59.389346   48286 notify.go:220] Checking for updates...
	I0829 18:24:59.391187   48286 out.go:177]   - MINIKUBE_LOCATION=19531
	I0829 18:24:59.393250   48286 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0829 18:24:59.394914   48286 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19531-2266/kubeconfig
	I0829 18:24:59.396520   48286 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19531-2266/.minikube
	I0829 18:24:59.398041   48286 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0829 18:24:59.399632   48286 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0829 18:24:59.401599   48286 config.go:182] Loaded profile config "functional-491299": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0829 18:24:59.402108   48286 driver.go:392] Setting default libvirt URI to qemu:///system
	I0829 18:24:59.441062   48286 docker.go:123] docker version: linux-27.2.0:Docker Engine - Community
	I0829 18:24:59.441186   48286 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0829 18:24:59.528808   48286 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-08-29 18:24:59.513195797 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0829 18:24:59.529127   48286 docker.go:307] overlay module found
	I0829 18:24:59.532095   48286 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0829 18:24:59.533662   48286 start.go:297] selected driver: docker
	I0829 18:24:59.533691   48286 start.go:901] validating driver "docker" against &{Name:functional-491299 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-491299 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 18:24:59.533844   48286 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0829 18:24:59.536005   48286 out.go:201] 
	W0829 18:24:59.537560   48286 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0829 18:24:59.539096   48286 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.26s)

TestFunctional/parallel/StatusCmd (1.18s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-arm64 -p functional-491299 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-arm64 -p functional-491299 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-arm64 -p functional-491299 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.18s)

TestFunctional/parallel/ServiceCmdConnect (10.74s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1627: (dbg) Run:  kubectl --context functional-491299 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-491299 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-65d86f57f4-m6dqn" [4a83946f-3368-4fa2-95fc-362e32fe6538] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-65d86f57f4-m6dqn" [4a83946f-3368-4fa2-95fc-362e32fe6538] Running
E0829 18:24:44.164002    7586 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-2266/.minikube/profiles/addons-399511/client.crt: no such file or directory" logger="UnhandledError"
E0829 18:24:44.171130    7586 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-2266/.minikube/profiles/addons-399511/client.crt: no such file or directory" logger="UnhandledError"
E0829 18:24:44.182688    7586 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-2266/.minikube/profiles/addons-399511/client.crt: no such file or directory" logger="UnhandledError"
E0829 18:24:44.204186    7586 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-2266/.minikube/profiles/addons-399511/client.crt: no such file or directory" logger="UnhandledError"
E0829 18:24:44.245617    7586 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-2266/.minikube/profiles/addons-399511/client.crt: no such file or directory" logger="UnhandledError"
E0829 18:24:44.327057    7586 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-2266/.minikube/profiles/addons-399511/client.crt: no such file or directory" logger="UnhandledError"
E0829 18:24:44.488541    7586 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-2266/.minikube/profiles/addons-399511/client.crt: no such file or directory" logger="UnhandledError"
E0829 18:24:44.810563    7586 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-2266/.minikube/profiles/addons-399511/client.crt: no such file or directory" logger="UnhandledError"
E0829 18:24:45.452620    7586 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-2266/.minikube/profiles/addons-399511/client.crt: no such file or directory" logger="UnhandledError"
E0829 18:24:46.733966    7586 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-2266/.minikube/profiles/addons-399511/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 10.00405811s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-arm64 -p functional-491299 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.49.2:31554
functional_test.go:1675: http://192.168.49.2:31554: success! body:

Hostname: hello-node-connect-65d86f57f4-m6dqn

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:31554
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (10.74s)

TestFunctional/parallel/AddonsCmd (0.17s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-arm64 -p functional-491299 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-arm64 -p functional-491299 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.17s)

TestFunctional/parallel/PersistentVolumeClaim (28.2s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [5b17871e-d18f-47f3-ad31-f7b7e9c4f69f] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.004328249s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-491299 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-491299 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-491299 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-491299 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [3494adb4-0cf9-4611-b2a2-249e90e33457] Pending
helpers_test.go:344: "sp-pod" [3494adb4-0cf9-4611-b2a2-249e90e33457] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [3494adb4-0cf9-4611-b2a2-249e90e33457] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 13.004447125s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-491299 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-491299 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-491299 delete -f testdata/storage-provisioner/pod.yaml: (1.115806113s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-491299 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [ac4dd053-aa3e-4027-93f4-f98133d1e76a] Pending
helpers_test.go:344: "sp-pod" [ac4dd053-aa3e-4027-93f4-f98133d1e76a] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
E0829 18:24:49.295806    7586 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-2266/.minikube/profiles/addons-399511/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "sp-pod" [ac4dd053-aa3e-4027-93f4-f98133d1e76a] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.004943477s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-491299 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (28.20s)

TestFunctional/parallel/SSHCmd (0.72s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-arm64 -p functional-491299 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-arm64 -p functional-491299 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.72s)

TestFunctional/parallel/CpCmd (2.53s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-491299 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-491299 ssh -n functional-491299 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-491299 cp functional-491299:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd516883618/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-491299 ssh -n functional-491299 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-491299 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-491299 ssh -n functional-491299 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.53s)

TestFunctional/parallel/FileSync (0.3s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/7586/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-arm64 -p functional-491299 ssh "sudo cat /etc/test/nested/copy/7586/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.30s)

TestFunctional/parallel/CertSync (2.16s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/7586.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-491299 ssh "sudo cat /etc/ssl/certs/7586.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/7586.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-491299 ssh "sudo cat /usr/share/ca-certificates/7586.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-491299 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/75862.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-491299 ssh "sudo cat /etc/ssl/certs/75862.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/75862.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-491299 ssh "sudo cat /usr/share/ca-certificates/75862.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-491299 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.16s)

TestFunctional/parallel/NodeLabels (0.1s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-491299 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.10s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.28s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-arm64 -p functional-491299 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-491299 ssh "sudo systemctl is-active crio": exit status 1 (278.025954ms)

-- stdout --
	inactive

-- /stdout --
** stderr **
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.28s)

TestFunctional/parallel/License (0.2s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.20s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.6s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-491299 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-491299 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-491299 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 45883: os: process already finished
helpers_test.go:502: unable to terminate pid 45710: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-491299 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.60s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-491299 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.5s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-491299 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [25143667-1c7c-4b5f-b4e5-e1b8b6053b96] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [25143667-1c7c-4b5f-b4e5-e1b8b6053b96] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 10.004565224s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.50s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.1s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-491299 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.10s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.97.222.253 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-491299 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ServiceCmd/DeployApp (6.27s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1437: (dbg) Run:  kubectl --context functional-491299 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-491299 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-64b4f8f9ff-ztps6" [2a7ec78f-433a-4178-9a70-7fc3d78ae820] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-64b4f8f9ff-ztps6" [2a7ec78f-433a-4178-9a70-7fc3d78ae820] Running
E0829 18:24:54.417831    7586 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-2266/.minikube/profiles/addons-399511/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 6.003792467s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (6.27s)

TestFunctional/parallel/ServiceCmd/List (0.52s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-arm64 -p functional-491299 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.52s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.54s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-arm64 -p functional-491299 service list -o json
functional_test.go:1494: Took "537.159405ms" to run "out/minikube-linux-arm64 -p functional-491299 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.54s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.53s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.53s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.5s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-arm64 -p functional-491299 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.49.2:31063
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.50s)

TestFunctional/parallel/ProfileCmd/profile_list (0.55s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1315: Took "452.314977ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1329: Took "96.524818ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.55s)

TestFunctional/parallel/ServiceCmd/Format (0.49s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-arm64 -p functional-491299 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.49s)

TestFunctional/parallel/ServiceCmd/URL (0.5s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-arm64 -p functional-491299 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.49.2:31063
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.50s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.56s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1366: Took "447.152954ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1379: Took "108.469967ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.56s)

TestFunctional/parallel/MountCmd/any-port (9.33s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-491299 /tmp/TestFunctionalparallelMountCmdany-port1641715702/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1724955898021319474" to /tmp/TestFunctionalparallelMountCmdany-port1641715702/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1724955898021319474" to /tmp/TestFunctionalparallelMountCmdany-port1641715702/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1724955898021319474" to /tmp/TestFunctionalparallelMountCmdany-port1641715702/001/test-1724955898021319474
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-491299 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-491299 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (500.636078ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-491299 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-491299 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Aug 29 18:24 created-by-test
-rw-r--r-- 1 docker docker 24 Aug 29 18:24 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Aug 29 18:24 test-1724955898021319474
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-491299 ssh cat /mount-9p/test-1724955898021319474
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-491299 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [83f41780-b073-431f-bdb2-0a4e2be1daff] Pending
helpers_test.go:344: "busybox-mount" [83f41780-b073-431f-bdb2-0a4e2be1daff] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
E0829 18:25:04.659676    7586 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-2266/.minikube/profiles/addons-399511/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox-mount" [83f41780-b073-431f-bdb2-0a4e2be1daff] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [83f41780-b073-431f-bdb2-0a4e2be1daff] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.003761496s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-491299 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-491299 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-491299 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-491299 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-491299 /tmp/TestFunctionalparallelMountCmdany-port1641715702/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (9.33s)

TestFunctional/parallel/MountCmd/specific-port (2.38s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-491299 /tmp/TestFunctionalparallelMountCmdspecific-port2753786645/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-491299 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-491299 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (533.850655ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-491299 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-491299 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-491299 /tmp/TestFunctionalparallelMountCmdspecific-port2753786645/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-491299 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-491299 ssh "sudo umount -f /mount-9p": exit status 1 (322.656915ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr **
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-491299 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-491299 /tmp/TestFunctionalparallelMountCmdspecific-port2753786645/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.38s)

TestFunctional/parallel/MountCmd/VerifyCleanup (2.42s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-491299 /tmp/TestFunctionalparallelMountCmdVerifyCleanup929264940/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-491299 /tmp/TestFunctionalparallelMountCmdVerifyCleanup929264940/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-491299 /tmp/TestFunctionalparallelMountCmdVerifyCleanup929264940/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-491299 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-491299 ssh "findmnt -T" /mount1: exit status 1 (745.932259ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-491299 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-491299 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-491299 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-491299 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-491299 /tmp/TestFunctionalparallelMountCmdVerifyCleanup929264940/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-491299 /tmp/TestFunctionalparallelMountCmdVerifyCleanup929264940/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-491299 /tmp/TestFunctionalparallelMountCmdVerifyCleanup929264940/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.42s)
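The VerifyCleanup flow above mounts three directories in parallel, then confirms each with `findmnt -T`, which reports the filesystem containing a given path. A minimal Python sketch of that lookup rule (longest mount-point prefix wins); the mount-table entries here are hypothetical, not taken from this run:

```python
def find_mount(target, mounts):
    # `findmnt -T PATH` resolves the filesystem containing PATH: the entry
    # whose mount point is the longest prefix of the target path.
    best = None
    for mnt in mounts:
        point = mnt["target"]
        if point == "/" or target == point or target.startswith(point.rstrip("/") + "/"):
            if best is None or len(point) > len(best["target"]):
                best = mnt
    return best

# Hypothetical mount table with a 9p entry like the ones created above.
mounts = [
    {"target": "/", "source": "overlay"},
    {"target": "/mount1", "source": "192.168.49.1"},
]
print(find_mount("/mount1", mounts)["source"])
```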

TestFunctional/parallel/Version/short (0.09s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-arm64 -p functional-491299 version --short
--- PASS: TestFunctional/parallel/Version/short (0.09s)

TestFunctional/parallel/Version/components (1.19s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-arm64 -p functional-491299 version -o=json --components
functional_test.go:2270: (dbg) Done: out/minikube-linux-arm64 -p functional-491299 version -o=json --components: (1.191986919s)
--- PASS: TestFunctional/parallel/Version/components (1.19s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-491299 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-491299 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.0
registry.k8s.io/kube-proxy:v1.31.0
registry.k8s.io/kube-controller-manager:v1.31.0
registry.k8s.io/kube-apiserver:v1.31.0
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-491299
docker.io/kubernetesui/metrics-scraper:<none>
docker.io/kubernetesui/dashboard:<none>
docker.io/kicbase/echo-server:functional-491299
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-491299 image ls --format short --alsologtostderr:
I0829 18:25:19.978658   51743 out.go:345] Setting OutFile to fd 1 ...
I0829 18:25:19.978828   51743 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0829 18:25:19.978838   51743 out.go:358] Setting ErrFile to fd 2...
I0829 18:25:19.978843   51743 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0829 18:25:19.979085   51743 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19531-2266/.minikube/bin
I0829 18:25:19.979737   51743 config.go:182] Loaded profile config "functional-491299": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0829 18:25:19.979862   51743 config.go:182] Loaded profile config "functional-491299": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0829 18:25:19.980386   51743 cli_runner.go:164] Run: docker container inspect functional-491299 --format={{.State.Status}}
I0829 18:25:19.998247   51743 ssh_runner.go:195] Run: systemctl --version
I0829 18:25:19.998299   51743 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-491299
I0829 18:25:20.021195   51743 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19531-2266/.minikube/machines/functional-491299/id_rsa Username:docker}
I0829 18:25:20.125312   51743 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.27s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-491299 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-491299 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/kube-proxy                  | v1.31.0           | 71d55d66fd4ee | 94.7MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | ba04bb24b9575 | 29MB   |
| registry.k8s.io/pause                       | 3.3               | 3d18732f8686c | 484kB  |
| docker.io/library/nginx                     | alpine            | 70594c812316a | 47MB   |
| registry.k8s.io/kube-controller-manager     | v1.31.0           | fcb0683e6bdbd | 85.9MB |
| registry.k8s.io/kube-scheduler              | v1.31.0           | fbbbd428abb4d | 66MB   |
| docker.io/library/nginx                     | latest            | a9dfdba8b7190 | 193MB  |
| docker.io/kubernetesui/metrics-scraper      | <none>            | a422e0e982356 | 42.3MB |
| registry.k8s.io/pause                       | latest            | 8cb2091f603e7 | 240kB  |
| registry.k8s.io/kube-apiserver              | v1.31.0           | cd0f0ae0ec9e0 | 91.5MB |
| registry.k8s.io/etcd                        | 3.5.15-0          | 27e3830e14027 | 139MB  |
| registry.k8s.io/echoserver-arm              | 1.8               | 72565bf5bbedf | 85MB   |
| docker.io/kubernetesui/dashboard            | <none>            | 20b332c9a70d8 | 244MB  |
| docker.io/kicbase/echo-server               | functional-491299 | ce2d2cda2d858 | 4.78MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 1611cd07b61d5 | 3.55MB |
| registry.k8s.io/pause                       | 3.1               | 8057e0500773a | 525kB  |
| docker.io/library/minikube-local-cache-test | functional-491299 | 224d8bd3a3f2a | 30B    |
| registry.k8s.io/pause                       | 3.10              | afb61768ce381 | 514kB  |
| registry.k8s.io/coredns/coredns             | v1.11.1           | 2437cf7621777 | 57.4MB |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-491299 image ls --format table --alsologtostderr:
I0829 18:25:20.707482   51960 out.go:345] Setting OutFile to fd 1 ...
I0829 18:25:20.707622   51960 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0829 18:25:20.707633   51960 out.go:358] Setting ErrFile to fd 2...
I0829 18:25:20.707639   51960 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0829 18:25:20.707925   51960 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19531-2266/.minikube/bin
I0829 18:25:20.708615   51960 config.go:182] Loaded profile config "functional-491299": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0829 18:25:20.708747   51960 config.go:182] Loaded profile config "functional-491299": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0829 18:25:20.709217   51960 cli_runner.go:164] Run: docker container inspect functional-491299 --format={{.State.Status}}
I0829 18:25:20.726872   51960 ssh_runner.go:195] Run: systemctl --version
I0829 18:25:20.726929   51960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-491299
I0829 18:25:20.750526   51960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19531-2266/.minikube/machines/functional-491299/id_rsa Username:docker}
I0829 18:25:20.845373   51960 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.24s)
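The Size column in the table above appears to use decimal (SI) units with roughly three significant figures. A best-effort Python reimplementation of that formatting (not minikube's actual code) reproduces the values shown:

```python
def human_size(num_bytes):
    """Render a byte count the way the table's Size column does:
    decimal units, ~3 significant figures (best-effort guess)."""
    for unit in ("B", "kB", "MB", "GB"):
        if num_bytes < 1000:
            return f"{num_bytes:.3g}{unit}"
        num_bytes /= 1000
    return f"{num_bytes:.3g}TB"

print(human_size(94700000))  # kube-proxy row   -> "94.7MB"
print(human_size(484000))    # pause:3.3 row    -> "484kB"
print(human_size(30))        # local-cache row  -> "30B"
```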

TestFunctional/parallel/ImageCommands/ImageListJson (0.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-491299 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-491299 image ls --format json --alsologtostderr:
[{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3550000"},{"id":"fbbbd428abb4dae52ab3018797d00d5840a739f0cc5697b662791831a60b0adb","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.0"],"size":"66000000"},{"id":"2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"57400000"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"484000"},{"id":"afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10"],"size":"514000"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"525000"},{"id":"224d8bd3a3f2adf07be24c5e203ce99c39aa7070d6c8e69befe8bab2e64aa340","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-491299"],"size":"30"},{"id":"70594c812316a9bc20dd5d679982c6322dc7cf0128687ae9f849d0207783e753","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"47000000"},{"id":"27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"139000000"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29000000"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"cd0f0ae0ec9e0cdc092079156c122bf034ba3f24d31c1b1dd1b52a42ecf9b388","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.0"],"size":"91500000"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"244000000"},{"id":"ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-491299"],"size":"4780000"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"42300000"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":[],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"85000000"},{"id":"a9dfdba8b719078c5705fdecd6f8315765cc79e473111aa9451551ddc340b2bc","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"193000000"},{"id":"fcb0683e6bdbd083710cf2d6fd7eb699c77fe4994c38a5c82d059e2e3cb4c2fd","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.0"],"size":"85900000"},{"id":"71d55d66fd4eec8986225089a135fadd96bc6624d987096808772ce1e1924d89","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.31.0"],"size":"94700000"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-491299 image ls --format json --alsologtostderr:
I0829 18:25:20.456822   51897 out.go:345] Setting OutFile to fd 1 ...
I0829 18:25:20.456976   51897 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0829 18:25:20.456986   51897 out.go:358] Setting ErrFile to fd 2...
I0829 18:25:20.456992   51897 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0829 18:25:20.457211   51897 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19531-2266/.minikube/bin
I0829 18:25:20.458022   51897 config.go:182] Loaded profile config "functional-491299": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0829 18:25:20.458157   51897 config.go:182] Loaded profile config "functional-491299": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0829 18:25:20.458705   51897 cli_runner.go:164] Run: docker container inspect functional-491299 --format={{.State.Status}}
I0829 18:25:20.476204   51897 ssh_runner.go:195] Run: systemctl --version
I0829 18:25:20.476255   51897 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-491299
I0829 18:25:20.494528   51897 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19531-2266/.minikube/machines/functional-491299/id_rsa Username:docker}
I0829 18:25:20.606021   51897 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.26s)
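The JSON format is a flat array of records with `id`, `repoDigests`, `repoTags`, and `size` fields, with `size` encoded as a string of bytes rather than a number. A short sketch of consuming it; the sample below is trimmed from the output above (ids abbreviated):

```python
import json

# Trimmed sample in the same shape as `minikube image ls --format json`.
raw = '''[
  {"id": "1611cd07b61d5...", "repoDigests": [], "repoTags": ["gcr.io/k8s-minikube/busybox:1.28.4-glibc"], "size": "3550000"},
  {"id": "27e3830e14027...", "repoDigests": [], "repoTags": ["registry.k8s.io/etcd:3.5.15-0"], "size": "139000000"}
]'''

images = json.loads(raw)
# "size" is a string of bytes; convert before doing arithmetic.
total_mb = sum(int(img["size"]) for img in images) / 1e6
tags = [tag for img in images for tag in img["repoTags"]]
print(tags, f"{total_mb:.1f}MB")
```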

TestFunctional/parallel/ImageCommands/ImageListYaml (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-491299 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-491299 image ls --format yaml --alsologtostderr:
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-491299
size: "4780000"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "42300000"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3550000"
- id: 224d8bd3a3f2adf07be24c5e203ce99c39aa7070d6c8e69befe8bab2e64aa340
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-491299
size: "30"
- id: 70594c812316a9bc20dd5d679982c6322dc7cf0128687ae9f849d0207783e753
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "47000000"
- id: afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10
size: "514000"
- id: fbbbd428abb4dae52ab3018797d00d5840a739f0cc5697b662791831a60b0adb
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.0
size: "66000000"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "484000"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "525000"
- id: 71d55d66fd4eec8986225089a135fadd96bc6624d987096808772ce1e1924d89
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.31.0
size: "94700000"
- id: 27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "139000000"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests: []
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "85000000"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: a9dfdba8b719078c5705fdecd6f8315765cc79e473111aa9451551ddc340b2bc
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "193000000"
- id: cd0f0ae0ec9e0cdc092079156c122bf034ba3f24d31c1b1dd1b52a42ecf9b388
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.0
size: "91500000"
- id: fcb0683e6bdbd083710cf2d6fd7eb699c77fe4994c38a5c82d059e2e3cb4c2fd
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.0
size: "85900000"
- id: 2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "57400000"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "244000000"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29000000"

functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-491299 image ls --format yaml --alsologtostderr:
I0829 18:25:20.189484   51801 out.go:345] Setting OutFile to fd 1 ...
I0829 18:25:20.189648   51801 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0829 18:25:20.189658   51801 out.go:358] Setting ErrFile to fd 2...
I0829 18:25:20.189663   51801 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0829 18:25:20.189929   51801 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19531-2266/.minikube/bin
I0829 18:25:20.190572   51801 config.go:182] Loaded profile config "functional-491299": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0829 18:25:20.190707   51801 config.go:182] Loaded profile config "functional-491299": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0829 18:25:20.191220   51801 cli_runner.go:164] Run: docker container inspect functional-491299 --format={{.State.Status}}
I0829 18:25:20.218504   51801 ssh_runner.go:195] Run: systemctl --version
I0829 18:25:20.218560   51801 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-491299
I0829 18:25:20.241065   51801 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19531-2266/.minikube/machines/functional-491299/id_rsa Username:docker}
I0829 18:25:20.345541   51801 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.28s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.35s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-arm64 -p functional-491299 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-491299 ssh pgrep buildkitd: exit status 1 (355.774788ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-arm64 -p functional-491299 image build -t localhost/my-image:functional-491299 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-arm64 -p functional-491299 image build -t localhost/my-image:functional-491299 testdata/build --alsologtostderr: (2.785883035s)
functional_test.go:323: (dbg) Stderr: out/minikube-linux-arm64 -p functional-491299 image build -t localhost/my-image:functional-491299 testdata/build --alsologtostderr:
I0829 18:25:20.586496   51926 out.go:345] Setting OutFile to fd 1 ...
I0829 18:25:20.586747   51926 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0829 18:25:20.586760   51926 out.go:358] Setting ErrFile to fd 2...
I0829 18:25:20.586766   51926 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0829 18:25:20.587153   51926 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19531-2266/.minikube/bin
I0829 18:25:20.588202   51926 config.go:182] Loaded profile config "functional-491299": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0829 18:25:20.589049   51926 config.go:182] Loaded profile config "functional-491299": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0829 18:25:20.589610   51926 cli_runner.go:164] Run: docker container inspect functional-491299 --format={{.State.Status}}
I0829 18:25:20.612702   51926 ssh_runner.go:195] Run: systemctl --version
I0829 18:25:20.612770   51926 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-491299
I0829 18:25:20.636398   51926 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19531-2266/.minikube/machines/functional-491299/id_rsa Username:docker}
I0829 18:25:20.740477   51926 build_images.go:161] Building image from path: /tmp/build.1347815131.tar
I0829 18:25:20.740545   51926 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0829 18:25:20.752469   51926 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1347815131.tar
I0829 18:25:20.759513   51926 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1347815131.tar: stat -c "%s %y" /var/lib/minikube/build/build.1347815131.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.1347815131.tar': No such file or directory
I0829 18:25:20.759539   51926 ssh_runner.go:362] scp /tmp/build.1347815131.tar --> /var/lib/minikube/build/build.1347815131.tar (3072 bytes)
I0829 18:25:20.794954   51926 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1347815131
I0829 18:25:20.805985   51926 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1347815131 -xf /var/lib/minikube/build/build.1347815131.tar
I0829 18:25:20.815596   51926 docker.go:360] Building image: /var/lib/minikube/build/build.1347815131
I0829 18:25:20.815667   51926 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-491299 /var/lib/minikube/build/build.1347815131
#0 building with "default" instance using docker driver

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.2s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9 527B / 527B done
#5 sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02 1.47kB / 1.47kB done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.1s
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.5s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.0s done
#5 DONE 0.6s

#6 [2/3] RUN true
#6 DONE 0.2s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers 0.0s done
#8 writing image sha256:8de3c9bee9ae186c3421169c5214289c7cb2c2839b827cd8c812e74c95d11125 done
#8 naming to localhost/my-image:functional-491299 done
#8 DONE 0.1s
I0829 18:25:23.285655   51926 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-491299 /var/lib/minikube/build/build.1347815131: (2.469963394s)
I0829 18:25:23.285884   51926 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1347815131
I0829 18:25:23.294829   51926 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1347815131.tar
I0829 18:25:23.303592   51926 build_images.go:217] Built localhost/my-image:functional-491299 from /tmp/build.1347815131.tar
I0829 18:25:23.303621   51926 build_images.go:133] succeeded building to: functional-491299
I0829 18:25:23.303626   51926 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-491299 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.35s)
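Per the log above, `image build` packs the local `testdata/build` context into a tar (`build_images.go:161`), copies it to `/var/lib/minikube/build/`, and runs `docker build` inside the node. A hedged sketch of the packing step; the file contents are inferred from build stages #5-#7 (the actual Dockerfile is not shown in the log) and may differ from the real testdata:

```python
import io
import tarfile

# Inferred from the build stages above (#5 FROM busybox, #6 RUN true,
# #7 ADD content.txt); hypothetical stand-ins for testdata/build.
files = {
    "Dockerfile": b"FROM gcr.io/k8s-minikube/busybox\nRUN true\nADD content.txt /\n",
    "content.txt": b"hello\n",
}

buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w") as tar:
    for name, data in files.items():
        info = tarfile.TarInfo(name=name)
        info.size = len(data)
        tar.addfile(info, io.BytesIO(data))

# The resulting archive is what gets shipped to /var/lib/minikube/build/
# and unpacked there before `docker build` runs.
buf.seek(0)
with tarfile.open(fileobj=buf) as tar:
    print(tar.getnames())
```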

TestFunctional/parallel/ImageCommands/Setup (1.23s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.207792079s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-491299
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.23s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.01s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-arm64 -p functional-491299 image load --daemon kicbase/echo-server:functional-491299 --alsologtostderr
2024/08/29 18:25:14 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-491299 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.01s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.2s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-491299 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.20s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.23s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-491299 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.23s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.19s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-491299 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.19s)

TestFunctional/parallel/DockerEnv/bash (1.44s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:499: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-arm64 -p functional-491299 docker-env) && out/minikube-linux-arm64 status -p functional-491299"
functional_test.go:522: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-arm64 -p functional-491299 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (1.44s)
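A note on what the DockerEnv/bash test exercises: `docker-env` prints shell `export` statements, which only take effect if they are `eval`'d in the same shell that later runs `docker`. A minimal sketch of that mechanism, with placeholder values rather than output from this run:

```shell
#!/bin/sh
# Stand-in for `minikube docker-env` (placeholder values, not from this run).
docker_env() {
  printf 'export DOCKER_HOST="tcp://192.168.49.2:2376"\n'
  printf 'export DOCKER_TLS_VERIFY="1"\n'
}

# eval executes the printed exports in the *current* shell, so a docker
# invocation in this same shell would talk to the daemon inside minikube.
eval "$(docker_env)"
printf '%s\n' "$DOCKER_HOST"    # prints tcp://192.168.49.2:2376
```

This is why the test wraps both steps in a single `/bin/bash -c "eval $(...) && docker images"` invocation: environment exported in one child shell would be lost before a separately spawned `docker` command.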

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.03s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p functional-491299 image load --daemon kicbase/echo-server:functional-491299 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-491299 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.03s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-491299
functional_test.go:245: (dbg) Run:  out/minikube-linux-arm64 -p functional-491299 image load --daemon kicbase/echo-server:functional-491299 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-491299 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.27s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.46s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-491299 image save kicbase/echo-server:functional-491299 /home/jenkins/workspace/Docker_Linux_docker_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.46s)
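ImageSaveToFile only asserts that `image save` exits zero. A cheap extra sanity check by hand is listing the tarball's contents, since a docker-archive always carries a top-level `manifest.json`. Sketched here with a stand-in archive rather than a real saved image:

```shell
#!/bin/sh
# Build a stand-in tarball containing the one entry every docker-archive
# has, then list it the way you would check a real echo-server-save.tar.
tmp=$(mktemp -d)
echo '[]' > "$tmp/manifest.json"
tar -cf "$tmp/echo-server-save.tar" -C "$tmp" manifest.json
tar -tf "$tmp/echo-server-save.tar"    # prints manifest.json
rm -r "$tmp"
```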

TestFunctional/parallel/ImageCommands/ImageRemove (0.51s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-arm64 -p functional-491299 image rm kicbase/echo-server:functional-491299 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-491299 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.51s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.81s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-arm64 -p functional-491299 image load /home/jenkins/workspace/Docker_Linux_docker_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-491299 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.81s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.43s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-491299
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-491299 image save --daemon kicbase/echo-server:functional-491299 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-491299
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.43s)

TestFunctional/delete_echo-server_images (0.05s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-491299
--- PASS: TestFunctional/delete_echo-server_images (0.05s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-491299
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-491299
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (128.37s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 start -p ha-393093 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=docker
E0829 18:26:06.104480    7586 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-2266/.minikube/profiles/addons-399511/client.crt: no such file or directory" logger="UnhandledError"
E0829 18:27:28.025887    7586 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-2266/.minikube/profiles/addons-399511/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 start -p ha-393093 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=docker: (2m7.52936563s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-393093 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (128.37s)

TestMultiControlPlane/serial/DeployApp (8.25s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-393093 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-393093 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 kubectl -p ha-393093 -- rollout status deployment/busybox: (5.006633699s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-393093 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-393093 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-393093 -- exec busybox-7dff88458-srr4h -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-393093 -- exec busybox-7dff88458-vbhxd -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-393093 -- exec busybox-7dff88458-w5r9r -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-393093 -- exec busybox-7dff88458-srr4h -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-393093 -- exec busybox-7dff88458-vbhxd -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-393093 -- exec busybox-7dff88458-w5r9r -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-393093 -- exec busybox-7dff88458-srr4h -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-393093 -- exec busybox-7dff88458-vbhxd -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-393093 -- exec busybox-7dff88458-w5r9r -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (8.25s)

TestMultiControlPlane/serial/PingHostFromPods (1.66s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-393093 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-393093 -- exec busybox-7dff88458-srr4h -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-393093 -- exec busybox-7dff88458-srr4h -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-393093 -- exec busybox-7dff88458-vbhxd -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-393093 -- exec busybox-7dff88458-vbhxd -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-393093 -- exec busybox-7dff88458-w5r9r -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-393093 -- exec busybox-7dff88458-w5r9r -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.66s)
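The `nslookup … | awk 'NR==5' | cut -d' ' -f3` pipeline above depends on busybox's fixed nslookup layout: the answer sits on line 5 and the IP is the third space-delimited field. A sketch against canned output (the canned text is illustrative; the real test resolves `host.minikube.internal` inside a busybox pod):

```shell
#!/bin/sh
# Canned busybox-style nslookup output (illustrative, not captured here).
out='Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      host.minikube.internal
Address 1: 192.168.49.1 host.minikube.internal'

# NR==5 selects the answer line; field 3 is the resolved IP.
printf '%s\n' "$out" | awk 'NR==5' | cut -d' ' -f3    # prints 192.168.49.1
```

The extracted address matches the gateway IP (`192.168.49.1`) that the test then pings from each pod.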

TestMultiControlPlane/serial/AddWorkerNode (26.87s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-393093 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 node add -p ha-393093 -v=7 --alsologtostderr: (25.798348393s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-393093 status -v=7 --alsologtostderr
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-393093 status -v=7 --alsologtostderr: (1.071582305s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (26.87s)

TestMultiControlPlane/serial/NodeLabels (0.12s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-393093 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.12s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.8s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.80s)

TestMultiControlPlane/serial/CopyFile (19.91s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-arm64 -p ha-393093 status --output json -v=7 --alsologtostderr
ha_test.go:326: (dbg) Done: out/minikube-linux-arm64 -p ha-393093 status --output json -v=7 --alsologtostderr: (1.078813025s)
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-393093 cp testdata/cp-test.txt ha-393093:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-393093 ssh -n ha-393093 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-393093 cp ha-393093:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2035439334/001/cp-test_ha-393093.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-393093 ssh -n ha-393093 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-393093 cp ha-393093:/home/docker/cp-test.txt ha-393093-m02:/home/docker/cp-test_ha-393093_ha-393093-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-393093 ssh -n ha-393093 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-393093 ssh -n ha-393093-m02 "sudo cat /home/docker/cp-test_ha-393093_ha-393093-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-393093 cp ha-393093:/home/docker/cp-test.txt ha-393093-m03:/home/docker/cp-test_ha-393093_ha-393093-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-393093 ssh -n ha-393093 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-393093 ssh -n ha-393093-m03 "sudo cat /home/docker/cp-test_ha-393093_ha-393093-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-393093 cp ha-393093:/home/docker/cp-test.txt ha-393093-m04:/home/docker/cp-test_ha-393093_ha-393093-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-393093 ssh -n ha-393093 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-393093 ssh -n ha-393093-m04 "sudo cat /home/docker/cp-test_ha-393093_ha-393093-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-393093 cp testdata/cp-test.txt ha-393093-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-393093 ssh -n ha-393093-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-393093 cp ha-393093-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2035439334/001/cp-test_ha-393093-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-393093 ssh -n ha-393093-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-393093 cp ha-393093-m02:/home/docker/cp-test.txt ha-393093:/home/docker/cp-test_ha-393093-m02_ha-393093.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-393093 ssh -n ha-393093-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-393093 ssh -n ha-393093 "sudo cat /home/docker/cp-test_ha-393093-m02_ha-393093.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-393093 cp ha-393093-m02:/home/docker/cp-test.txt ha-393093-m03:/home/docker/cp-test_ha-393093-m02_ha-393093-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-393093 ssh -n ha-393093-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-393093 ssh -n ha-393093-m03 "sudo cat /home/docker/cp-test_ha-393093-m02_ha-393093-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-393093 cp ha-393093-m02:/home/docker/cp-test.txt ha-393093-m04:/home/docker/cp-test_ha-393093-m02_ha-393093-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-393093 ssh -n ha-393093-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-393093 ssh -n ha-393093-m04 "sudo cat /home/docker/cp-test_ha-393093-m02_ha-393093-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-393093 cp testdata/cp-test.txt ha-393093-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-393093 ssh -n ha-393093-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-393093 cp ha-393093-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2035439334/001/cp-test_ha-393093-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-393093 ssh -n ha-393093-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-393093 cp ha-393093-m03:/home/docker/cp-test.txt ha-393093:/home/docker/cp-test_ha-393093-m03_ha-393093.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-393093 ssh -n ha-393093-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-393093 ssh -n ha-393093 "sudo cat /home/docker/cp-test_ha-393093-m03_ha-393093.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-393093 cp ha-393093-m03:/home/docker/cp-test.txt ha-393093-m02:/home/docker/cp-test_ha-393093-m03_ha-393093-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-393093 ssh -n ha-393093-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-393093 ssh -n ha-393093-m02 "sudo cat /home/docker/cp-test_ha-393093-m03_ha-393093-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-393093 cp ha-393093-m03:/home/docker/cp-test.txt ha-393093-m04:/home/docker/cp-test_ha-393093-m03_ha-393093-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-393093 ssh -n ha-393093-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-393093 ssh -n ha-393093-m04 "sudo cat /home/docker/cp-test_ha-393093-m03_ha-393093-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-393093 cp testdata/cp-test.txt ha-393093-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-393093 ssh -n ha-393093-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-393093 cp ha-393093-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2035439334/001/cp-test_ha-393093-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-393093 ssh -n ha-393093-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-393093 cp ha-393093-m04:/home/docker/cp-test.txt ha-393093:/home/docker/cp-test_ha-393093-m04_ha-393093.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-393093 ssh -n ha-393093-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-393093 ssh -n ha-393093 "sudo cat /home/docker/cp-test_ha-393093-m04_ha-393093.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-393093 cp ha-393093-m04:/home/docker/cp-test.txt ha-393093-m02:/home/docker/cp-test_ha-393093-m04_ha-393093-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-393093 ssh -n ha-393093-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-393093 ssh -n ha-393093-m02 "sudo cat /home/docker/cp-test_ha-393093-m04_ha-393093-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-393093 cp ha-393093-m04:/home/docker/cp-test.txt ha-393093-m03:/home/docker/cp-test_ha-393093-m04_ha-393093-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-393093 ssh -n ha-393093-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-393093 ssh -n ha-393093-m03 "sudo cat /home/docker/cp-test_ha-393093-m04_ha-393093-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (19.91s)

TestMultiControlPlane/serial/StopSecondaryNode (11.63s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-arm64 -p ha-393093 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-linux-arm64 -p ha-393093 node stop m02 -v=7 --alsologtostderr: (10.841057309s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-arm64 -p ha-393093 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-393093 status -v=7 --alsologtostderr: exit status 7 (783.991129ms)
-- stdout --
	ha-393093
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-393093-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-393093-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-393093-m04
	type: Worker
	host: Running
	kubelet: Running
	
-- /stdout --
** stderr ** 
	I0829 18:28:43.048386   74263 out.go:345] Setting OutFile to fd 1 ...
	I0829 18:28:43.048572   74263 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 18:28:43.048587   74263 out.go:358] Setting ErrFile to fd 2...
	I0829 18:28:43.048593   74263 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 18:28:43.048890   74263 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19531-2266/.minikube/bin
	I0829 18:28:43.049140   74263 out.go:352] Setting JSON to false
	I0829 18:28:43.049223   74263 mustload.go:65] Loading cluster: ha-393093
	I0829 18:28:43.049332   74263 notify.go:220] Checking for updates...
	I0829 18:28:43.049764   74263 config.go:182] Loaded profile config "ha-393093": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0829 18:28:43.049803   74263 status.go:255] checking status of ha-393093 ...
	I0829 18:28:43.050705   74263 cli_runner.go:164] Run: docker container inspect ha-393093 --format={{.State.Status}}
	I0829 18:28:43.070779   74263 status.go:330] ha-393093 host status = "Running" (err=<nil>)
	I0829 18:28:43.070802   74263 host.go:66] Checking if "ha-393093" exists ...
	I0829 18:28:43.071162   74263 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-393093
	I0829 18:28:43.095824   74263 host.go:66] Checking if "ha-393093" exists ...
	I0829 18:28:43.096263   74263 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0829 18:28:43.096372   74263 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-393093
	I0829 18:28:43.114700   74263 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/19531-2266/.minikube/machines/ha-393093/id_rsa Username:docker}
	I0829 18:28:43.213725   74263 ssh_runner.go:195] Run: systemctl --version
	I0829 18:28:43.218787   74263 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 18:28:43.231984   74263 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0829 18:28:43.310669   74263 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:53 OomKillDisable:true NGoroutines:71 SystemTime:2024-08-29 18:28:43.299410824 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0829 18:28:43.311249   74263 kubeconfig.go:125] found "ha-393093" server: "https://192.168.49.254:8443"
	I0829 18:28:43.311281   74263 api_server.go:166] Checking apiserver status ...
	I0829 18:28:43.311344   74263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 18:28:43.323604   74263 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2343/cgroup
	I0829 18:28:43.333706   74263 api_server.go:182] apiserver freezer: "9:freezer:/docker/3036c8aaf8f05be099a894d512dc01233fca05a32fb3d41e45efaf75d6dca700/kubepods/burstable/pod48628b7c17e4f17c31d6f5da403e5bda/4fd7eefee2c8a205bac96a77ca89ef56f9e0926dea26859b0b9c1e2f087b6cbe"
	I0829 18:28:43.333873   74263 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/3036c8aaf8f05be099a894d512dc01233fca05a32fb3d41e45efaf75d6dca700/kubepods/burstable/pod48628b7c17e4f17c31d6f5da403e5bda/4fd7eefee2c8a205bac96a77ca89ef56f9e0926dea26859b0b9c1e2f087b6cbe/freezer.state
	I0829 18:28:43.342709   74263 api_server.go:204] freezer state: "THAWED"
	I0829 18:28:43.342738   74263 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0829 18:28:43.350389   74263 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0829 18:28:43.350428   74263 status.go:422] ha-393093 apiserver status = Running (err=<nil>)
	I0829 18:28:43.350455   74263 status.go:257] ha-393093 status: &{Name:ha-393093 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0829 18:28:43.350478   74263 status.go:255] checking status of ha-393093-m02 ...
	I0829 18:28:43.350798   74263 cli_runner.go:164] Run: docker container inspect ha-393093-m02 --format={{.State.Status}}
	I0829 18:28:43.375626   74263 status.go:330] ha-393093-m02 host status = "Stopped" (err=<nil>)
	I0829 18:28:43.375647   74263 status.go:343] host is not running, skipping remaining checks
	I0829 18:28:43.375654   74263 status.go:257] ha-393093-m02 status: &{Name:ha-393093-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0829 18:28:43.375680   74263 status.go:255] checking status of ha-393093-m03 ...
	I0829 18:28:43.376047   74263 cli_runner.go:164] Run: docker container inspect ha-393093-m03 --format={{.State.Status}}
	I0829 18:28:43.395959   74263 status.go:330] ha-393093-m03 host status = "Running" (err=<nil>)
	I0829 18:28:43.395998   74263 host.go:66] Checking if "ha-393093-m03" exists ...
	I0829 18:28:43.396431   74263 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-393093-m03
	I0829 18:28:43.420261   74263 host.go:66] Checking if "ha-393093-m03" exists ...
	I0829 18:28:43.420729   74263 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0829 18:28:43.420776   74263 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-393093-m03
	I0829 18:28:43.439900   74263 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/19531-2266/.minikube/machines/ha-393093-m03/id_rsa Username:docker}
	I0829 18:28:43.534389   74263 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 18:28:43.548137   74263 kubeconfig.go:125] found "ha-393093" server: "https://192.168.49.254:8443"
	I0829 18:28:43.548261   74263 api_server.go:166] Checking apiserver status ...
	I0829 18:28:43.548525   74263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 18:28:43.566396   74263 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2206/cgroup
	I0829 18:28:43.581871   74263 api_server.go:182] apiserver freezer: "9:freezer:/docker/4683694254edbf60368633ba521cb06b6bbc57feedaf147acde83bac9e48efe3/kubepods/burstable/podc2f9d0912f7fbeab6b8d59ea98d68eac/baccb33c7d680687d7deb99d7436ca20705bd3d646e91aee7b68b1fe60186a30"
	I0829 18:28:43.581945   74263 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/4683694254edbf60368633ba521cb06b6bbc57feedaf147acde83bac9e48efe3/kubepods/burstable/podc2f9d0912f7fbeab6b8d59ea98d68eac/baccb33c7d680687d7deb99d7436ca20705bd3d646e91aee7b68b1fe60186a30/freezer.state
	I0829 18:28:43.592427   74263 api_server.go:204] freezer state: "THAWED"
	I0829 18:28:43.592460   74263 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0829 18:28:43.602354   74263 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0829 18:28:43.602383   74263 status.go:422] ha-393093-m03 apiserver status = Running (err=<nil>)
	I0829 18:28:43.602394   74263 status.go:257] ha-393093-m03 status: &{Name:ha-393093-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0829 18:28:43.602431   74263 status.go:255] checking status of ha-393093-m04 ...
	I0829 18:28:43.602773   74263 cli_runner.go:164] Run: docker container inspect ha-393093-m04 --format={{.State.Status}}
	I0829 18:28:43.619427   74263 status.go:330] ha-393093-m04 host status = "Running" (err=<nil>)
	I0829 18:28:43.619450   74263 host.go:66] Checking if "ha-393093-m04" exists ...
	I0829 18:28:43.619764   74263 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-393093-m04
	I0829 18:28:43.635943   74263 host.go:66] Checking if "ha-393093-m04" exists ...
	I0829 18:28:43.636570   74263 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0829 18:28:43.636635   74263 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-393093-m04
	I0829 18:28:43.662540   74263 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/19531-2266/.minikube/machines/ha-393093-m04/id_rsa Username:docker}
	I0829 18:28:43.758177   74263 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 18:28:43.770112   74263 status.go:257] ha-393093-m04 status: &{Name:ha-393093-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (11.63s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.56s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.56s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (32.13s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-arm64 -p ha-393093 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Done: out/minikube-linux-arm64 -p ha-393093 node start m02 -v=7 --alsologtostderr: (30.866453721s)
ha_test.go:428: (dbg) Run:  out/minikube-linux-arm64 -p ha-393093 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Done: out/minikube-linux-arm64 -p ha-393093 status -v=7 --alsologtostderr: (1.119152501s)
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (32.13s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (16.25s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
E0829 18:29:27.584957    7586 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-2266/.minikube/profiles/functional-491299/client.crt: no such file or directory" logger="UnhandledError"
E0829 18:29:27.591536    7586 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-2266/.minikube/profiles/functional-491299/client.crt: no such file or directory" logger="UnhandledError"
E0829 18:29:27.603015    7586 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-2266/.minikube/profiles/functional-491299/client.crt: no such file or directory" logger="UnhandledError"
E0829 18:29:27.624587    7586 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-2266/.minikube/profiles/functional-491299/client.crt: no such file or directory" logger="UnhandledError"
E0829 18:29:27.665938    7586 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-2266/.minikube/profiles/functional-491299/client.crt: no such file or directory" logger="UnhandledError"
E0829 18:29:27.747324    7586 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-2266/.minikube/profiles/functional-491299/client.crt: no such file or directory" logger="UnhandledError"
E0829 18:29:27.908909    7586 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-2266/.minikube/profiles/functional-491299/client.crt: no such file or directory" logger="UnhandledError"
E0829 18:29:28.230639    7586 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-2266/.minikube/profiles/functional-491299/client.crt: no such file or directory" logger="UnhandledError"
E0829 18:29:28.872727    7586 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-2266/.minikube/profiles/functional-491299/client.crt: no such file or directory" logger="UnhandledError"
E0829 18:29:30.154166    7586 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-2266/.minikube/profiles/functional-491299/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (16.2465148s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (16.25s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (254.08s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-393093 -v=7 --alsologtostderr
E0829 18:29:32.716290    7586 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-2266/.minikube/profiles/functional-491299/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:462: (dbg) Run:  out/minikube-linux-arm64 stop -p ha-393093 -v=7 --alsologtostderr
E0829 18:29:37.837736    7586 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-2266/.minikube/profiles/functional-491299/client.crt: no such file or directory" logger="UnhandledError"
E0829 18:29:44.161474    7586 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-2266/.minikube/profiles/addons-399511/client.crt: no such file or directory" logger="UnhandledError"
E0829 18:29:48.079064    7586 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-2266/.minikube/profiles/functional-491299/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:462: (dbg) Done: out/minikube-linux-arm64 stop -p ha-393093 -v=7 --alsologtostderr: (34.079702452s)
ha_test.go:467: (dbg) Run:  out/minikube-linux-arm64 start -p ha-393093 --wait=true -v=7 --alsologtostderr
E0829 18:30:08.560631    7586 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-2266/.minikube/profiles/functional-491299/client.crt: no such file or directory" logger="UnhandledError"
E0829 18:30:11.867569    7586 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-2266/.minikube/profiles/addons-399511/client.crt: no such file or directory" logger="UnhandledError"
E0829 18:30:49.522255    7586 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-2266/.minikube/profiles/functional-491299/client.crt: no such file or directory" logger="UnhandledError"
E0829 18:32:11.444503    7586 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-2266/.minikube/profiles/functional-491299/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:467: (dbg) Done: out/minikube-linux-arm64 start -p ha-393093 --wait=true -v=7 --alsologtostderr: (3m39.864400633s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-393093
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (254.08s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (11.89s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-arm64 -p ha-393093 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-arm64 -p ha-393093 node delete m03 -v=7 --alsologtostderr: (10.911516221s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-arm64 -p ha-393093 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (11.89s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.51s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.51s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (32.93s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-arm64 -p ha-393093 stop -v=7 --alsologtostderr
E0829 18:34:27.584255    7586 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-2266/.minikube/profiles/functional-491299/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:531: (dbg) Done: out/minikube-linux-arm64 -p ha-393093 stop -v=7 --alsologtostderr: (32.818809549s)
ha_test.go:537: (dbg) Run:  out/minikube-linux-arm64 -p ha-393093 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-393093 status -v=7 --alsologtostderr: exit status 7 (112.368134ms)

                                                
                                                
-- stdout --
	ha-393093
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-393093-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-393093-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0829 18:34:32.061715  101662 out.go:345] Setting OutFile to fd 1 ...
	I0829 18:34:32.061872  101662 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 18:34:32.061884  101662 out.go:358] Setting ErrFile to fd 2...
	I0829 18:34:32.061889  101662 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 18:34:32.062163  101662 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19531-2266/.minikube/bin
	I0829 18:34:32.062362  101662 out.go:352] Setting JSON to false
	I0829 18:34:32.062400  101662 mustload.go:65] Loading cluster: ha-393093
	I0829 18:34:32.062524  101662 notify.go:220] Checking for updates...
	I0829 18:34:32.062836  101662 config.go:182] Loaded profile config "ha-393093": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0829 18:34:32.062857  101662 status.go:255] checking status of ha-393093 ...
	I0829 18:34:32.064474  101662 cli_runner.go:164] Run: docker container inspect ha-393093 --format={{.State.Status}}
	I0829 18:34:32.081107  101662 status.go:330] ha-393093 host status = "Stopped" (err=<nil>)
	I0829 18:34:32.081130  101662 status.go:343] host is not running, skipping remaining checks
	I0829 18:34:32.081138  101662 status.go:257] ha-393093 status: &{Name:ha-393093 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0829 18:34:32.081163  101662 status.go:255] checking status of ha-393093-m02 ...
	I0829 18:34:32.081530  101662 cli_runner.go:164] Run: docker container inspect ha-393093-m02 --format={{.State.Status}}
	I0829 18:34:32.104402  101662 status.go:330] ha-393093-m02 host status = "Stopped" (err=<nil>)
	I0829 18:34:32.104436  101662 status.go:343] host is not running, skipping remaining checks
	I0829 18:34:32.104444  101662 status.go:257] ha-393093-m02 status: &{Name:ha-393093-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0829 18:34:32.104464  101662 status.go:255] checking status of ha-393093-m04 ...
	I0829 18:34:32.104777  101662 cli_runner.go:164] Run: docker container inspect ha-393093-m04 --format={{.State.Status}}
	I0829 18:34:32.125269  101662 status.go:330] ha-393093-m04 host status = "Stopped" (err=<nil>)
	I0829 18:34:32.125294  101662 status.go:343] host is not running, skipping remaining checks
	I0829 18:34:32.125301  101662 status.go:257] ha-393093-m04 status: &{Name:ha-393093-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (32.93s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (158.46s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-arm64 start -p ha-393093 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=docker
E0829 18:34:44.161706    7586 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-2266/.minikube/profiles/addons-399511/client.crt: no such file or directory" logger="UnhandledError"
E0829 18:34:55.285850    7586 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-2266/.minikube/profiles/functional-491299/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:560: (dbg) Done: out/minikube-linux-arm64 start -p ha-393093 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=docker: (2m37.494004967s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-arm64 -p ha-393093 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (158.46s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.56s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.56s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (47.37s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-393093 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-linux-arm64 node add -p ha-393093 --control-plane -v=7 --alsologtostderr: (46.347269047s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-arm64 -p ha-393093 status -v=7 --alsologtostderr
ha_test.go:611: (dbg) Done: out/minikube-linux-arm64 -p ha-393093 status -v=7 --alsologtostderr: (1.018137492s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (47.37s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.77s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.77s)

                                                
                                    
TestImageBuild/serial/Setup (34.11s)

                                                
                                                
=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-linux-arm64 start -p image-786600 --driver=docker  --container-runtime=docker
image_test.go:69: (dbg) Done: out/minikube-linux-arm64 start -p image-786600 --driver=docker  --container-runtime=docker: (34.108170063s)
--- PASS: TestImageBuild/serial/Setup (34.11s)

                                                
                                    
TestImageBuild/serial/NormalBuild (2.07s)

                                                
                                                
=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-786600
image_test.go:78: (dbg) Done: out/minikube-linux-arm64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-786600: (2.066590762s)
--- PASS: TestImageBuild/serial/NormalBuild (2.07s)

                                                
                                    
TestImageBuild/serial/BuildWithBuildArg (1.01s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-786600
image_test.go:99: (dbg) Done: out/minikube-linux-arm64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-786600: (1.012574836s)
--- PASS: TestImageBuild/serial/BuildWithBuildArg (1.01s)

                                                
                                    
TestImageBuild/serial/BuildWithDockerIgnore (0.89s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-786600
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.89s)

                                                
                                    
TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.78s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-786600
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.78s)

                                                
                                    
TestJSONOutput/start/Command (74.87s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-655588 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=docker
E0829 18:39:27.584194    7586 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-2266/.minikube/profiles/functional-491299/client.crt: no such file or directory" logger="UnhandledError"
E0829 18:39:44.162499    7586 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-2266/.minikube/profiles/addons-399511/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-655588 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=docker: (1m14.871407295s)
--- PASS: TestJSONOutput/start/Command (74.87s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.59s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-655588 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.59s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.49s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-655588 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.49s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (5.72s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-655588 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-655588 --output=json --user=testUser: (5.719147615s)
--- PASS: TestJSONOutput/stop/Command (5.72s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.22s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-595801 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-595801 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (76.086025ms)
-- stdout --
	{"specversion":"1.0","id":"50bbb1e3-b185-4177-870f-900ab23414d4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-595801] minikube v1.33.1 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"3c6e4c8d-0c79-447a-a0f2-77c5e4e27263","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19531"}}
	{"specversion":"1.0","id":"e7388cbe-0d29-4b46-803a-7ac42626b36e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"e91175eb-ac37-4ba2-9b4c-874963662fca","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19531-2266/kubeconfig"}}
	{"specversion":"1.0","id":"cf2f5221-3d51-4130-bc9f-2a33ea6d6d39","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19531-2266/.minikube"}}
	{"specversion":"1.0","id":"c7beb90c-1142-461e-8c7a-223fb7f286bf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"54fd0818-668f-4590-9e27-22326659ccc9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"6ac1544a-d5f6-4640-9f62-05fdc8e78488","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-595801" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-595801
--- PASS: TestErrorJSONOutput (0.22s)
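As an aside, each line of the `--output=json` stream in the stdout block above is a self-contained CloudEvents JSON object, so the failure can be picked out mechanically. A minimal sketch (the sample line is copied verbatim from the log; the field names are those visible in it):

```python
import json

# One CloudEvents line copied from the `-- stdout --` block above.
line = '{"specversion":"1.0","id":"6ac1544a-d5f6-4640-9f62-05fdc8e78488","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver \'fail\' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}'

event = json.loads(line)
if event["type"].endswith(".error"):
    # exitcode "56" matches the `exit status 56` reported by the test runner.
    print(event["data"]["exitcode"], event["data"]["name"])
```

Running this prints `56 DRV_UNSUPPORTED_OS`, i.e. the same exit code and error name the test asserts on.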

TestKicCustomNetwork/create_custom_network (35.2s)
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-128178 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-128178 --network=: (33.116621059s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-128178" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-128178
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-128178: (2.057461447s)
--- PASS: TestKicCustomNetwork/create_custom_network (35.20s)

TestKicCustomNetwork/use_default_bridge_network (34.36s)
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-776427 --network=bridge
E0829 18:41:07.228948    7586 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-2266/.minikube/profiles/addons-399511/client.crt: no such file or directory" logger="UnhandledError"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-776427 --network=bridge: (32.355923537s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-776427" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-776427
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-776427: (1.976107966s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (34.36s)

TestKicExistingNetwork (34.84s)
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-787380 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-787380 --network=existing-network: (32.686384727s)
helpers_test.go:175: Cleaning up "existing-network-787380" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-787380
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-787380: (1.992974485s)
--- PASS: TestKicExistingNetwork (34.84s)

TestKicCustomSubnet (35.63s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-909425 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-909425 --subnet=192.168.60.0/24: (33.580951804s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-909425 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-909425" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-909425
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-909425: (2.021740239s)
--- PASS: TestKicCustomSubnet (35.63s)

TestKicStaticIP (34.44s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-543028 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-543028 --static-ip=192.168.200.200: (32.062562736s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-543028 ip
helpers_test.go:175: Cleaning up "static-ip-543028" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-543028
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-543028: (2.183919964s)
--- PASS: TestKicStaticIP (34.44s)

TestMainNoArgs (0.05s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (73.63s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-087095 --driver=docker  --container-runtime=docker
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-087095 --driver=docker  --container-runtime=docker: (29.742674293s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-090156 --driver=docker  --container-runtime=docker
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-090156 --driver=docker  --container-runtime=docker: (38.398009886s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-087095
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-090156
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-090156" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-090156
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-090156: (2.109111731s)
helpers_test.go:175: Cleaning up "first-087095" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-087095
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-087095: (2.118135331s)
--- PASS: TestMinikubeProfile (73.63s)

TestMountStart/serial/StartWithMountFirst (10.3s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-872773 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
E0829 18:44:27.585982    7586 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-2266/.minikube/profiles/functional-491299/client.crt: no such file or directory" logger="UnhandledError"
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-872773 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (9.302099746s)
--- PASS: TestMountStart/serial/StartWithMountFirst (10.30s)

TestMountStart/serial/VerifyMountFirst (0.25s)
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-872773 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.25s)

TestMountStart/serial/StartWithMountSecond (11.63s)
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-886082 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-886082 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (10.625961764s)
E0829 18:44:44.162482    7586 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-2266/.minikube/profiles/addons-399511/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestMountStart/serial/StartWithMountSecond (11.63s)

TestMountStart/serial/VerifyMountSecond (0.26s)
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-886082 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.26s)

TestMountStart/serial/DeleteFirst (1.47s)
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-872773 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-872773 --alsologtostderr -v=5: (1.465653482s)
--- PASS: TestMountStart/serial/DeleteFirst (1.47s)

TestMountStart/serial/VerifyMountPostDelete (0.28s)
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-886082 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.28s)

TestMountStart/serial/Stop (1.21s)
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-886082
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-886082: (1.2073785s)
--- PASS: TestMountStart/serial/Stop (1.21s)

TestMountStart/serial/RestartStopped (8.7s)
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-886082
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-886082: (7.694853742s)
--- PASS: TestMountStart/serial/RestartStopped (8.70s)

TestMountStart/serial/VerifyMountPostStop (0.26s)
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-886082 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.26s)

TestMultiNode/serial/FreshStart2Nodes (85.32s)
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-792909 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=docker
E0829 18:45:50.647179    7586 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-2266/.minikube/profiles/functional-491299/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-792909 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=docker: (1m24.688909813s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-792909 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (85.32s)

TestMultiNode/serial/DeployApp2Nodes (49.71s)
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-792909 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-792909 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-792909 -- rollout status deployment/busybox: (4.475705479s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-792909 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-792909 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-792909 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-792909 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-792909 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-792909 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-792909 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-792909 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-792909 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-792909 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-792909 -- exec busybox-7dff88458-8djc8 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-792909 -- exec busybox-7dff88458-h4c9f -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-792909 -- exec busybox-7dff88458-8djc8 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-792909 -- exec busybox-7dff88458-h4c9f -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-792909 -- exec busybox-7dff88458-8djc8 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-792909 -- exec busybox-7dff88458-h4c9f -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (49.71s)
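The retry loop above re-runs the same jsonpath query until two pod IPs appear; the check itself reduces to splitting kubectl's space-separated, quoted output. A sketch of that logic (the single-IP string is taken from the log; the two-IP string is a hypothetical example of the state the test waits for):

```python
def pod_ip_count(jsonpath_output: str) -> int:
    # kubectl prints `{.items[*].status.podIP}` as space-separated IPs;
    # the test log wraps the whole string in single quotes.
    ips = jsonpath_output.strip("'").split()
    return len(ips)

print(pod_ip_count("'10.244.0.3'"))             # the transient state logged above: 1 IP
print(pod_ip_count("'10.244.0.3 10.244.1.2'"))  # hypothetical settled state: 2 IPs
```

The "may be temporary" retries simply wait for the second busybox replica to be scheduled on the other node and receive its own pod IP.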

TestMultiNode/serial/PingHostFrom2Pods (1.03s)
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-792909 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-792909 -- exec busybox-7dff88458-8djc8 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-792909 -- exec busybox-7dff88458-8djc8 -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-792909 -- exec busybox-7dff88458-h4c9f -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-792909 -- exec busybox-7dff88458-h4c9f -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.03s)

TestMultiNode/serial/AddNode (16.92s)
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-792909 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-792909 -v 3 --alsologtostderr: (16.100680758s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-792909 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (16.92s)

TestMultiNode/serial/MultiNodeLabels (0.12s)
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-792909 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.12s)

TestMultiNode/serial/ProfileList (0.37s)
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.37s)

TestMultiNode/serial/CopyFile (10.18s)
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-792909 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-792909 cp testdata/cp-test.txt multinode-792909:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-792909 ssh -n multinode-792909 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-792909 cp multinode-792909:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile65243447/001/cp-test_multinode-792909.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-792909 ssh -n multinode-792909 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-792909 cp multinode-792909:/home/docker/cp-test.txt multinode-792909-m02:/home/docker/cp-test_multinode-792909_multinode-792909-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-792909 ssh -n multinode-792909 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-792909 ssh -n multinode-792909-m02 "sudo cat /home/docker/cp-test_multinode-792909_multinode-792909-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-792909 cp multinode-792909:/home/docker/cp-test.txt multinode-792909-m03:/home/docker/cp-test_multinode-792909_multinode-792909-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-792909 ssh -n multinode-792909 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-792909 ssh -n multinode-792909-m03 "sudo cat /home/docker/cp-test_multinode-792909_multinode-792909-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-792909 cp testdata/cp-test.txt multinode-792909-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-792909 ssh -n multinode-792909-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-792909 cp multinode-792909-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile65243447/001/cp-test_multinode-792909-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-792909 ssh -n multinode-792909-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-792909 cp multinode-792909-m02:/home/docker/cp-test.txt multinode-792909:/home/docker/cp-test_multinode-792909-m02_multinode-792909.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-792909 ssh -n multinode-792909-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-792909 ssh -n multinode-792909 "sudo cat /home/docker/cp-test_multinode-792909-m02_multinode-792909.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-792909 cp multinode-792909-m02:/home/docker/cp-test.txt multinode-792909-m03:/home/docker/cp-test_multinode-792909-m02_multinode-792909-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-792909 ssh -n multinode-792909-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-792909 ssh -n multinode-792909-m03 "sudo cat /home/docker/cp-test_multinode-792909-m02_multinode-792909-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-792909 cp testdata/cp-test.txt multinode-792909-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-792909 ssh -n multinode-792909-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-792909 cp multinode-792909-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile65243447/001/cp-test_multinode-792909-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-792909 ssh -n multinode-792909-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-792909 cp multinode-792909-m03:/home/docker/cp-test.txt multinode-792909:/home/docker/cp-test_multinode-792909-m03_multinode-792909.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-792909 ssh -n multinode-792909-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-792909 ssh -n multinode-792909 "sudo cat /home/docker/cp-test_multinode-792909-m03_multinode-792909.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-792909 cp multinode-792909-m03:/home/docker/cp-test.txt multinode-792909-m02:/home/docker/cp-test_multinode-792909-m03_multinode-792909-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-792909 ssh -n multinode-792909-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-792909 ssh -n multinode-792909-m02 "sudo cat /home/docker/cp-test_multinode-792909-m03_multinode-792909-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.18s)

TestMultiNode/serial/StopNode (2.24s)
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-792909 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-792909 node stop m03: (1.215853814s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-792909 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-792909 status: exit status 7 (506.272225ms)
-- stdout --
	multinode-792909
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-792909-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-792909-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-792909 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-792909 status --alsologtostderr: exit status 7 (520.457705ms)
-- stdout --
	multinode-792909
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-792909-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-792909-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0829 18:47:43.482415  177962 out.go:345] Setting OutFile to fd 1 ...
	I0829 18:47:43.482593  177962 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 18:47:43.482602  177962 out.go:358] Setting ErrFile to fd 2...
	I0829 18:47:43.482607  177962 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 18:47:43.482851  177962 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19531-2266/.minikube/bin
	I0829 18:47:43.483042  177962 out.go:352] Setting JSON to false
	I0829 18:47:43.483087  177962 mustload.go:65] Loading cluster: multinode-792909
	I0829 18:47:43.483170  177962 notify.go:220] Checking for updates...
	I0829 18:47:43.484107  177962 config.go:182] Loaded profile config "multinode-792909": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0829 18:47:43.484131  177962 status.go:255] checking status of multinode-792909 ...
	I0829 18:47:43.484705  177962 cli_runner.go:164] Run: docker container inspect multinode-792909 --format={{.State.Status}}
	I0829 18:47:43.514213  177962 status.go:330] multinode-792909 host status = "Running" (err=<nil>)
	I0829 18:47:43.514239  177962 host.go:66] Checking if "multinode-792909" exists ...
	I0829 18:47:43.514561  177962 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-792909
	I0829 18:47:43.542987  177962 host.go:66] Checking if "multinode-792909" exists ...
	I0829 18:47:43.543395  177962 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0829 18:47:43.543445  177962 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-792909
	I0829 18:47:43.567689  177962 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32908 SSHKeyPath:/home/jenkins/minikube-integration/19531-2266/.minikube/machines/multinode-792909/id_rsa Username:docker}
	I0829 18:47:43.665754  177962 ssh_runner.go:195] Run: systemctl --version
	I0829 18:47:43.669934  177962 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 18:47:43.681609  177962 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0829 18:47:43.737802  177962 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:61 SystemTime:2024-08-29 18:47:43.727754876 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0829 18:47:43.738441  177962 kubeconfig.go:125] found "multinode-792909" server: "https://192.168.67.2:8443"
	I0829 18:47:43.738472  177962 api_server.go:166] Checking apiserver status ...
	I0829 18:47:43.738521  177962 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 18:47:43.750043  177962 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2225/cgroup
	I0829 18:47:43.759760  177962 api_server.go:182] apiserver freezer: "9:freezer:/docker/ddde9f03bb88fb33bb6a1861103f526f90eeb3e6b8afb80502d357a931e324a1/kubepods/burstable/pod2e72f5826da75ec9847f05f6f632db3f/192ecd34a31149d60f5ffbcceb94231aded888488373e78206be9d13e9e58a99"
	I0829 18:47:43.759847  177962 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/ddde9f03bb88fb33bb6a1861103f526f90eeb3e6b8afb80502d357a931e324a1/kubepods/burstable/pod2e72f5826da75ec9847f05f6f632db3f/192ecd34a31149d60f5ffbcceb94231aded888488373e78206be9d13e9e58a99/freezer.state
	I0829 18:47:43.769067  177962 api_server.go:204] freezer state: "THAWED"
	I0829 18:47:43.769094  177962 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0829 18:47:43.776832  177962 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0829 18:47:43.776863  177962 status.go:422] multinode-792909 apiserver status = Running (err=<nil>)
	I0829 18:47:43.776874  177962 status.go:257] multinode-792909 status: &{Name:multinode-792909 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0829 18:47:43.776890  177962 status.go:255] checking status of multinode-792909-m02 ...
	I0829 18:47:43.777188  177962 cli_runner.go:164] Run: docker container inspect multinode-792909-m02 --format={{.State.Status}}
	I0829 18:47:43.795319  177962 status.go:330] multinode-792909-m02 host status = "Running" (err=<nil>)
	I0829 18:47:43.795346  177962 host.go:66] Checking if "multinode-792909-m02" exists ...
	I0829 18:47:43.795662  177962 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-792909-m02
	I0829 18:47:43.812215  177962 host.go:66] Checking if "multinode-792909-m02" exists ...
	I0829 18:47:43.812651  177962 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0829 18:47:43.812701  177962 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-792909-m02
	I0829 18:47:43.829179  177962 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32913 SSHKeyPath:/home/jenkins/minikube-integration/19531-2266/.minikube/machines/multinode-792909-m02/id_rsa Username:docker}
	I0829 18:47:43.921136  177962 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 18:47:43.932366  177962 status.go:257] multinode-792909-m02 status: &{Name:multinode-792909-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0829 18:47:43.932415  177962 status.go:255] checking status of multinode-792909-m03 ...
	I0829 18:47:43.932710  177962 cli_runner.go:164] Run: docker container inspect multinode-792909-m03 --format={{.State.Status}}
	I0829 18:47:43.949624  177962 status.go:330] multinode-792909-m03 host status = "Stopped" (err=<nil>)
	I0829 18:47:43.949645  177962 status.go:343] host is not running, skipping remaining checks
	I0829 18:47:43.949652  177962 status.go:257] multinode-792909-m03 status: &{Name:multinode-792909-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.24s)

                                                
                                    
TestMultiNode/serial/StartAfterStop (10.92s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-792909 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-792909 node start m03 -v=7 --alsologtostderr: (10.170508376s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-792909 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (10.92s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (104.86s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-792909
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-792909
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-792909: (22.596258753s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-792909 --wait=true -v=8 --alsologtostderr
E0829 18:49:27.584233    7586 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-2266/.minikube/profiles/functional-491299/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-792909 --wait=true -v=8 --alsologtostderr: (1m22.145467012s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-792909
--- PASS: TestMultiNode/serial/RestartKeepsNodes (104.86s)

                                                
                                    
TestMultiNode/serial/DeleteNode (5.72s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-792909 node delete m03
E0829 18:49:44.162325    7586 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-2266/.minikube/profiles/addons-399511/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-792909 node delete m03: (4.890247545s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-792909 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.72s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (21.55s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-792909 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-792909 stop: (21.37800154s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-792909 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-792909 status: exit status 7 (88.47133ms)

                                                
                                                
-- stdout --
	multinode-792909
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-792909-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-792909 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-792909 status --alsologtostderr: exit status 7 (84.62262ms)

                                                
                                                
-- stdout --
	multinode-792909
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-792909-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0829 18:50:06.957004  191232 out.go:345] Setting OutFile to fd 1 ...
	I0829 18:50:06.957135  191232 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 18:50:06.957189  191232 out.go:358] Setting ErrFile to fd 2...
	I0829 18:50:06.957194  191232 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 18:50:06.957460  191232 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19531-2266/.minikube/bin
	I0829 18:50:06.957642  191232 out.go:352] Setting JSON to false
	I0829 18:50:06.957687  191232 mustload.go:65] Loading cluster: multinode-792909
	I0829 18:50:06.957754  191232 notify.go:220] Checking for updates...
	I0829 18:50:06.958642  191232 config.go:182] Loaded profile config "multinode-792909": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0829 18:50:06.958666  191232 status.go:255] checking status of multinode-792909 ...
	I0829 18:50:06.959201  191232 cli_runner.go:164] Run: docker container inspect multinode-792909 --format={{.State.Status}}
	I0829 18:50:06.976230  191232 status.go:330] multinode-792909 host status = "Stopped" (err=<nil>)
	I0829 18:50:06.976256  191232 status.go:343] host is not running, skipping remaining checks
	I0829 18:50:06.976264  191232 status.go:257] multinode-792909 status: &{Name:multinode-792909 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0829 18:50:06.976342  191232 status.go:255] checking status of multinode-792909-m02 ...
	I0829 18:50:06.976706  191232 cli_runner.go:164] Run: docker container inspect multinode-792909-m02 --format={{.State.Status}}
	I0829 18:50:06.997915  191232 status.go:330] multinode-792909-m02 host status = "Stopped" (err=<nil>)
	I0829 18:50:06.997936  191232 status.go:343] host is not running, skipping remaining checks
	I0829 18:50:06.997944  191232 status.go:257] multinode-792909-m02 status: &{Name:multinode-792909-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (21.55s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (54.75s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-792909 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=docker
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-792909 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=docker: (54.055078764s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-792909 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (54.75s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (35.15s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-792909
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-792909-m02 --driver=docker  --container-runtime=docker
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-792909-m02 --driver=docker  --container-runtime=docker: exit status 14 (81.393567ms)

                                                
                                                
-- stdout --
	* [multinode-792909-m02] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19531
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19531-2266/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19531-2266/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-792909-m02' is duplicated with machine name 'multinode-792909-m02' in profile 'multinode-792909'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-792909-m03 --driver=docker  --container-runtime=docker
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-792909-m03 --driver=docker  --container-runtime=docker: (32.559120292s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-792909
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-792909: exit status 80 (430.25185ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-792909 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-792909-m03 already exists in multinode-792909-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-792909-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-792909-m03: (2.030736842s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (35.15s)

                                                
                                    
TestPreload (142.89s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-007048 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-007048 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.24.4: (1m43.751211523s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-007048 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-007048 image pull gcr.io/k8s-minikube/busybox: (2.406658506s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-007048
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-007048: (10.903927321s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-007048 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-007048 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker: (23.283775939s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-007048 image list
helpers_test.go:175: Cleaning up "test-preload-007048" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-007048
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-007048: (2.244371661s)
--- PASS: TestPreload (142.89s)

                                                
                                    
TestScheduledStopUnix (105.38s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-989011 --memory=2048 --driver=docker  --container-runtime=docker
E0829 18:54:27.584511    7586 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-2266/.minikube/profiles/functional-491299/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-989011 --memory=2048 --driver=docker  --container-runtime=docker: (32.196008494s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-989011 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-989011 -n scheduled-stop-989011
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-989011 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-989011 --cancel-scheduled
E0829 18:54:44.161530    7586 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-2266/.minikube/profiles/addons-399511/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-989011 -n scheduled-stop-989011
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-989011
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-989011 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-989011
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-989011: exit status 7 (67.43311ms)

                                                
                                                
-- stdout --
	scheduled-stop-989011
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-989011 -n scheduled-stop-989011
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-989011 -n scheduled-stop-989011: exit status 7 (73.961278ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-989011" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-989011
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-989011: (1.65021541s)
--- PASS: TestScheduledStopUnix (105.38s)

                                                
                                    
TestSkaffold (148.21s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /tmp/skaffold.exe122777052 version
skaffold_test.go:63: skaffold version: v2.13.2
skaffold_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p skaffold-409947 --memory=2600 --driver=docker  --container-runtime=docker
skaffold_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p skaffold-409947 --memory=2600 --driver=docker  --container-runtime=docker: (32.200497821s)
skaffold_test.go:86: copying out/minikube-linux-arm64 to /home/jenkins/workspace/Docker_Linux_docker_arm64/out/minikube
skaffold_test.go:105: (dbg) Run:  /tmp/skaffold.exe122777052 run --minikube-profile skaffold-409947 --kube-context skaffold-409947 --status-check=true --port-forward=false --interactive=false
E0829 18:57:47.232474    7586 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-2266/.minikube/profiles/addons-399511/client.crt: no such file or directory" logger="UnhandledError"
skaffold_test.go:105: (dbg) Done: /tmp/skaffold.exe122777052 run --minikube-profile skaffold-409947 --kube-context skaffold-409947 --status-check=true --port-forward=false --interactive=false: (1m10.705729715s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:344: "leeroy-app-77864c84bc-gwmzd" [3c726a5a-7640-459e-9ba4-c4f68aa4f43d] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 6.003697852s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:344: "leeroy-web-fd9cf8cf7-95cx7" [9ffe65da-1c33-4b0a-b147-50e10a42370b] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.004109255s
helpers_test.go:175: Cleaning up "skaffold-409947" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p skaffold-409947
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p skaffold-409947: (2.933916425s)
--- PASS: TestSkaffold (148.21s)

                                                
                                    
TestInsufficientStorage (11.3s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-510749 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=docker
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-510749 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=docker: exit status 26 (9.00869638s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"8379b2be-6008-497e-9ec6-4aaea5be186b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-510749] minikube v1.33.1 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"531bef4c-79a0-4df3-94a5-98e60afefbd4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19531"}}
	{"specversion":"1.0","id":"a0204147-8c00-40e6-b56d-1ed887ce34d8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"da5d79de-409c-42bb-95a2-a30994991e2e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19531-2266/kubeconfig"}}
	{"specversion":"1.0","id":"8ed02d7d-d485-4035-a856-8e2f509776fb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19531-2266/.minikube"}}
	{"specversion":"1.0","id":"15285e1b-9833-4752-8c22-098d698b0ea3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"b90a5a6a-8fc7-468d-a9ff-b49585574176","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"4533bcea-0c77-4da1-94ad-1d6c0d829e7b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"2cf34859-7b34-468d-b541-04f668a23cad","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"08af5b83-bdc5-4bc6-b712-aae6d343cdde","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"d388ccaf-4b99-4188-96eb-12fd92c45b4c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"66d46d9e-8eaa-45a7-be91-e44455b5b7ca","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-510749\" primary control-plane node in \"insufficient-storage-510749\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"1ee9f110-7b35-4aa9-89e4-2eae3fa9ad26","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.44-1724775115-19521 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"2fd45306-6594-4666-a327-740912699bf8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"6b28474f-ac96-4bc4-87e3-2a5e5d4489aa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-510749 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-510749 --output=json --layout=cluster: exit status 7 (285.097738ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-510749","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-510749","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0829 18:58:26.582079  225567 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-510749" does not appear in /home/jenkins/minikube-integration/19531-2266/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-510749 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-510749 --output=json --layout=cluster: exit status 7 (278.81192ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-510749","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-510749","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0829 18:58:26.862477  225630 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-510749" does not appear in /home/jenkins/minikube-integration/19531-2266/kubeconfig
	E0829 18:58:26.872758  225630 status.go:560] unable to read event log: stat: stat /home/jenkins/minikube-integration/19531-2266/.minikube/profiles/insufficient-storage-510749/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-510749" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-510749
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-510749: (1.721158552s)
--- PASS: TestInsufficientStorage (11.30s)
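The RSRC_DOCKER_STORAGE advice above amounts to freeing space under /var before retrying. A minimal shell sketch of that recovery path (the 90% threshold and the exact prune flags are illustrative, not taken from the log):

```shell
#!/bin/sh
# Report how full a filesystem is, then apply the cleanup steps the
# RSRC_DOCKER_STORAGE error suggests once usage crosses a threshold.
usage_pct() {
    # `df --output=pcent` (GNU coreutils) prints e.g. " 42%"; keep digits only.
    df --output=pcent "$1" | tail -n 1 | tr -dc '0-9'
}

if [ "$(usage_pct /var)" -ge 90 ]; then
    docker system prune -a -f                 # step 1 from the advice
    minikube ssh -- docker system prune -f    # step 3, for the Docker runtime
fi
```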

                                                
                                    
TestRunningBinaryUpgrade (94.41s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.1434878886 start -p running-upgrade-718962 --memory=2200 --vm-driver=docker  --container-runtime=docker
E0829 19:04:25.283317    7586 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-2266/.minikube/profiles/skaffold-409947/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:04:27.584269    7586 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-2266/.minikube/profiles/functional-491299/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:04:44.161705    7586 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-2266/.minikube/profiles/addons-399511/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.1434878886 start -p running-upgrade-718962 --memory=2200 --vm-driver=docker  --container-runtime=docker: (52.741720857s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-718962 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-718962 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (38.508409356s)
helpers_test.go:175: Cleaning up "running-upgrade-718962" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-718962
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-718962: (2.133806162s)
--- PASS: TestRunningBinaryUpgrade (94.41s)

                                                
                                    
TestKubernetesUpgrade (387.91s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-321739 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-321739 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (1m1.138903115s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-321739
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-321739: (10.865909619s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-321739 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-321739 status --format={{.Host}}: exit status 7 (68.115295ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-321739 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-321739 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (4m41.339136591s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-321739 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-321739 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=docker
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-321739 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=docker: exit status 106 (117.904806ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-321739] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19531
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19531-2266/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19531-2266/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-321739
	    minikube start -p kubernetes-upgrade-321739 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-3217392 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.0, by running:
	    
	    minikube start -p kubernetes-upgrade-321739 --kubernetes-version=v1.31.0
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-321739 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-321739 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (31.610258248s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-321739" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-321739
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-321739: (2.617201237s)
--- PASS: TestKubernetesUpgrade (387.91s)
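The K8S_DOWNGRADE_UNSUPPORTED exit above comes from a guard that refuses in-place downgrades and tells the user to recreate the cluster instead. A sketch of that version comparison using `sort -V` (the `ver_lt` helper is ours, not minikube's):

```shell
#!/bin/sh
# ver_lt A B: succeed (exit 0) when version A sorts strictly before B.
ver_lt() {
    [ "$1" != "$2" ] &&
        [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n 1)" = "$1" ]
}

requested=v1.20.0
current=v1.31.0
if ver_lt "$requested" "$current"; then
    # Mirrors the suggestion printed by the test: recreate, don't downgrade.
    echo "refusing downgrade $current -> $requested; run: minikube delete -p <profile>"
fi
```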

                                                
                                    
TestMissingContainerUpgrade (184.9s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.1125991054 start -p missing-upgrade-504324 --memory=2200 --driver=docker  --container-runtime=docker
E0829 18:59:27.583657    7586 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-2266/.minikube/profiles/functional-491299/client.crt: no such file or directory" logger="UnhandledError"
E0829 18:59:44.162143    7586 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-2266/.minikube/profiles/addons-399511/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.1125991054 start -p missing-upgrade-504324 --memory=2200 --driver=docker  --container-runtime=docker: (1m50.378589439s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-504324
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-504324: (10.530556077s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-504324
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-504324 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-504324 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (1m0.880649284s)
helpers_test.go:175: Cleaning up "missing-upgrade-504324" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-504324
E0829 19:02:30.649058    7586 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-2266/.minikube/profiles/functional-491299/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-504324: (2.138503938s)
--- PASS: TestMissingContainerUpgrade (184.90s)

                                                
                                    
TestPause/serial/Start (56.02s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-935203 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=docker
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-935203 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=docker: (56.016413406s)
--- PASS: TestPause/serial/Start (56.02s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (37.71s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-935203 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-935203 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (37.667626692s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (37.71s)

                                                
                                    
TestPause/serial/Pause (1.2s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-935203 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-935203 --alsologtostderr -v=5: (1.197298564s)
--- PASS: TestPause/serial/Pause (1.20s)

                                                
                                    
TestPause/serial/VerifyStatus (0.56s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-935203 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-935203 --output=json --layout=cluster: exit status 2 (557.509986ms)

                                                
                                                
-- stdout --
	{"Name":"pause-935203","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-935203","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.56s)

                                                
                                    
TestPause/serial/Unpause (1.21s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-935203 --alsologtostderr -v=5
pause_test.go:121: (dbg) Done: out/minikube-linux-arm64 unpause -p pause-935203 --alsologtostderr -v=5: (1.208352744s)
--- PASS: TestPause/serial/Unpause (1.21s)

                                                
                                    
TestPause/serial/PauseAgain (1.06s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-935203 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-935203 --alsologtostderr -v=5: (1.05692776s)
--- PASS: TestPause/serial/PauseAgain (1.06s)

                                                
                                    
TestPause/serial/DeletePaused (2.26s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-935203 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-935203 --alsologtostderr -v=5: (2.261760913s)
--- PASS: TestPause/serial/DeletePaused (2.26s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (0.12s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-935203
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-935203: exit status 1 (14.695717ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-935203: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.12s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.91s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.91s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (84.79s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.816345597 start -p stopped-upgrade-353182 --memory=2200 --vm-driver=docker  --container-runtime=docker
E0829 19:03:03.344591    7586 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-2266/.minikube/profiles/skaffold-409947/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:03:03.350952    7586 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-2266/.minikube/profiles/skaffold-409947/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:03:03.362282    7586 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-2266/.minikube/profiles/skaffold-409947/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:03:03.383609    7586 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-2266/.minikube/profiles/skaffold-409947/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:03:03.424861    7586 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-2266/.minikube/profiles/skaffold-409947/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:03:03.506235    7586 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-2266/.minikube/profiles/skaffold-409947/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:03:03.667843    7586 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-2266/.minikube/profiles/skaffold-409947/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:03:03.989493    7586 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-2266/.minikube/profiles/skaffold-409947/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:03:04.631534    7586 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-2266/.minikube/profiles/skaffold-409947/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:03:05.913564    7586 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-2266/.minikube/profiles/skaffold-409947/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:03:08.475100    7586 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-2266/.minikube/profiles/skaffold-409947/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.816345597 start -p stopped-upgrade-353182 --memory=2200 --vm-driver=docker  --container-runtime=docker: (41.724299242s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.816345597 -p stopped-upgrade-353182 stop
E0829 19:03:13.597337    7586 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-2266/.minikube/profiles/skaffold-409947/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:03:23.839611    7586 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-2266/.minikube/profiles/skaffold-409947/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.816345597 -p stopped-upgrade-353182 stop: (11.067750653s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-353182 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E0829 19:03:44.322034    7586 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-2266/.minikube/profiles/skaffold-409947/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-353182 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (32.000501077s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (84.79s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.45s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-353182
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-353182: (1.454262363s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.45s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-983490 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-983490 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=docker: exit status 14 (82.236635ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-983490] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19531
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19531-2266/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19531-2266/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)
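Exit status 14 here is minikube's MK_USAGE code for contradictory flags. A toy sketch of the same mutual-exclusion check (the function and parameter names are illustrative, not minikube internals):

```shell
#!/bin/sh
# Reject --kubernetes-version combined with --no-kubernetes, echoing the
# MK_USAGE error from the log and returning the same exit code (14).
validate_flags() {
    no_kubernetes=$1
    kubernetes_version=$2
    if [ "$no_kubernetes" = "true" ] && [ -n "$kubernetes_version" ]; then
        echo "cannot specify --kubernetes-version with --no-kubernetes" >&2
        return 14
    fi
    return 0
}

# As in the log: a version was supplied, so the combination is rejected.
validate_flags true 1.20 2>/dev/null || echo "rejected with status $?"
```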

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (45.66s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-983490 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-983490 --driver=docker  --container-runtime=docker: (45.192182677s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-983490 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (45.66s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (15.09s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-983490 --no-kubernetes --driver=docker  --container-runtime=docker
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-983490 --no-kubernetes --driver=docker  --container-runtime=docker: (12.672763812s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-983490 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-983490 status -o json: exit status 2 (510.44263ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-983490","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-983490
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-983490: (1.902370548s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (15.09s)

                                                
                                    
TestNoKubernetes/serial/Start (11s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-983490 --no-kubernetes --driver=docker  --container-runtime=docker
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-983490 --no-kubernetes --driver=docker  --container-runtime=docker: (11.002629203s)
--- PASS: TestNoKubernetes/serial/Start (11.00s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.31s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-983490 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-983490 "sudo systemctl is-active --quiet service kubelet": exit status 1 (311.752804ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.31s)
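The "Process exited with status 3" above is the expected outcome: `systemctl is-active` exits 0 only for an active unit, and 3 is the conventional code for an inactive one, which is exactly what this test wants after Kubernetes is disabled. A small sketch of that interpretation (the wrapper name is ours):

```shell
#!/bin/sh
# Translate `systemctl is-active --quiet` exit codes into readable text.
# 0 = active, 3 = inactive (the status seen in the log); anything else,
# including a missing systemctl binary, is reported as unknown.
unit_state() {
    systemctl is-active --quiet "$1" 2>/dev/null
    case $? in
        0) echo "$1 is active" ;;
        3) echo "$1 is inactive" ;;
        *) echo "$1 state unknown" ;;
    esac
}

unit_state kubelet
```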

                                                
                                    
TestNoKubernetes/serial/ProfileList (0.69s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.69s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.26s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-983490
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-983490: (1.255551341s)
--- PASS: TestNoKubernetes/serial/Stop (1.26s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (8.73s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-983490 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-983490 --driver=docker  --container-runtime=docker: (8.725920712s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (8.73s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.33s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-983490 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-983490 "sudo systemctl is-active --quiet service kubelet": exit status 1 (332.881714ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.33s)
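Note that the VerifyK8sNotRunning checks invert the usual pass/fail convention: `systemctl is-active --quiet <unit>` exits 0 only when the unit is active, so the test passes precisely when the probed command fails (here, `ssh` relays systemd's exit status 3 for an inactive unit). A minimal sketch of that inverted check, with `false` standing in for the real `minikube ssh ... systemctl` probe:

```shell
# Inverted health check: pass when the probed command FAILS.
# The real probe in the log is:
#   out/minikube-linux-arm64 ssh -p <profile> \
#     "sudo systemctl is-active --quiet service kubelet"
# (exit 0 = active, exit 3 = inactive). `false` is a stand-in here.
check_not_running() {
  if "$@"; then
    echo "kubelet is active (unexpected)"
    return 1
  fi
  echo "kubelet not running (expected)"
}

check_not_running false
```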

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (172.43s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-410254 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0
E0829 19:08:31.046328    7586 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-2266/.minikube/profiles/skaffold-409947/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-410254 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0: (2m52.431686807s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (172.43s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (10.66s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-410254 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [2e6de2a6-b367-4c81-ab21-06dac1a208ab] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [2e6de2a6-b367-4c81-ab21-06dac1a208ab] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.004778674s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-410254 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.66s)
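The "waiting 8m0s for pods matching ..." steps above are a poll-until-healthy loop with a deadline. A hedged shell sketch of that pattern (the 5-second timeout and the `true` probe are illustrative stand-ins, not the harness's actual values):

```shell
# Poll a probe command until it succeeds or a deadline passes.
# Usage: wait_until <timeout_seconds> <command...>
wait_until() {
  deadline=$(( $(date +%s) + $1 ))
  shift
  until "$@"; do
    if [ "$(date +%s)" -ge "$deadline" ]; then
      return 1   # timed out waiting for the probe
    fi
    sleep 1
  done
}

# Illustrative probe that succeeds on the first attempt:
wait_until 5 true && echo "healthy"
```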

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.21s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-410254 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-410254 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.030751169s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-410254 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.21s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (11.16s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-410254 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-410254 --alsologtostderr -v=3: (11.157381539s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (11.16s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.34s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-410254 -n old-k8s-version-410254
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-410254 -n old-k8s-version-410254: exit status 7 (182.747538ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-410254 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.34s)
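As the "status error: exit status 7 (may be ok)" line shows, `minikube status` deliberately exits non-zero when the host is stopped, and the harness records the code but carries on. A sketch of that tolerant wrapper, with `sh -c 'exit 7'` standing in for the real `minikube status` call:

```shell
# Run a status command, report a non-zero exit, but don't fail the caller:
# minikube status uses distinct exit codes (e.g. 7 when the host is
# stopped) rather than a plain 0/1.
status_may_be_ok() {
  "$@"
  rc=$?
  if [ "$rc" -ne 0 ]; then
    echo "status error: exit status $rc (may be ok)"
  fi
  return 0
}

status_may_be_ok sh -c 'exit 7'
```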

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (149.05s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-410254 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-410254 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0: (2m28.664559697s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-410254 -n old-k8s-version-410254
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (149.05s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (58.98s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-322941 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.0
E0829 19:13:03.342775    7586 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-2266/.minikube/profiles/skaffold-409947/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-322941 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.0: (58.981603514s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (58.98s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (10.36s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-322941 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [bed730fb-077b-4ca5-a930-e08bdaf3095a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [bed730fb-077b-4ca5-a930-e08bdaf3095a] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.007004594s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-322941 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.36s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.16s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-322941 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-322941 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.003565459s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-322941 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.16s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (11.07s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-322941 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-322941 --alsologtostderr -v=3: (11.070155452s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (11.07s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-322941 -n no-preload-322941
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-322941 -n no-preload-322941: exit status 7 (65.959445ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-322941 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (267.82s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-322941 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-322941 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.0: (4m27.461897566s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-322941 -n no-preload-322941
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (267.82s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-vqs4x" [81f7687b-b76d-44e6-9718-609beabd9fc7] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00460081s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.1s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-vqs4x" [81f7687b-b76d-44e6-9718-609beabd9fc7] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004397278s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-410254 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-410254 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.22s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (2.87s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-410254 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-410254 -n old-k8s-version-410254
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-410254 -n old-k8s-version-410254: exit status 2 (307.343163ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-410254 -n old-k8s-version-410254
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-410254 -n old-k8s-version-410254: exit status 2 (348.676365ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-410254 --alsologtostderr -v=1
E0829 19:14:27.234675    7586 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-2266/.minikube/profiles/addons-399511/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:14:27.583487    7586 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-2266/.minikube/profiles/functional-491299/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-410254 -n old-k8s-version-410254
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-410254 -n old-k8s-version-410254
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.87s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (70.9s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-775826 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.0
E0829 19:14:44.161921    7586 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-2266/.minikube/profiles/addons-399511/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-775826 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.0: (1m10.900279701s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (70.90s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (9.36s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-775826 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [23d809d2-0fd2-4551-adb2-72945af7e9a4] Pending
helpers_test.go:344: "busybox" [23d809d2-0fd2-4551-adb2-72945af7e9a4] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [23d809d2-0fd2-4551-adb2-72945af7e9a4] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.003919476s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-775826 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.36s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.09s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-775826 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-775826 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.09s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (10.8s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-775826 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-775826 --alsologtostderr -v=3: (10.804338196s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (10.80s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-775826 -n embed-certs-775826
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-775826 -n embed-certs-775826: exit status 7 (68.444729ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-775826 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (268.8s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-775826 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.0
E0829 19:16:22.409212    7586 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-2266/.minikube/profiles/old-k8s-version-410254/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:16:22.416767    7586 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-2266/.minikube/profiles/old-k8s-version-410254/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:16:22.428180    7586 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-2266/.minikube/profiles/old-k8s-version-410254/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:16:22.449558    7586 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-2266/.minikube/profiles/old-k8s-version-410254/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:16:22.490921    7586 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-2266/.minikube/profiles/old-k8s-version-410254/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:16:22.572286    7586 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-2266/.minikube/profiles/old-k8s-version-410254/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:16:22.733793    7586 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-2266/.minikube/profiles/old-k8s-version-410254/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:16:23.055476    7586 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-2266/.minikube/profiles/old-k8s-version-410254/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:16:23.696856    7586 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-2266/.minikube/profiles/old-k8s-version-410254/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:16:24.979187    7586 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-2266/.minikube/profiles/old-k8s-version-410254/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:16:27.541034    7586 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-2266/.minikube/profiles/old-k8s-version-410254/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:16:32.663298    7586 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-2266/.minikube/profiles/old-k8s-version-410254/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:16:42.905578    7586 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-2266/.minikube/profiles/old-k8s-version-410254/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:17:03.387927    7586 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-2266/.minikube/profiles/old-k8s-version-410254/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:17:44.349531    7586 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-2266/.minikube/profiles/old-k8s-version-410254/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:18:03.342723    7586 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-2266/.minikube/profiles/skaffold-409947/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-775826 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.0: (4m28.300664234s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-775826 -n embed-certs-775826
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (268.80s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-2jxsk" [0fe6e661-31c8-4e02-94fc-5dada158cdc8] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004182836s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.1s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-2jxsk" [0fe6e661-31c8-4e02-94fc-5dada158cdc8] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00547959s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-322941 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.10s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-322941 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (2.97s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-322941 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-322941 -n no-preload-322941
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-322941 -n no-preload-322941: exit status 2 (362.365526ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-322941 -n no-preload-322941
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-322941 -n no-preload-322941: exit status 2 (327.955003ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-322941 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-322941 -n no-preload-322941
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-322941 -n no-preload-322941
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.97s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (81.27s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-238639 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.0
E0829 19:19:06.271171    7586 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-2266/.minikube/profiles/old-k8s-version-410254/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:19:10.653095    7586 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-2266/.minikube/profiles/functional-491299/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:19:26.408248    7586 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-2266/.minikube/profiles/skaffold-409947/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:19:27.583981    7586 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-2266/.minikube/profiles/functional-491299/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:19:44.162114    7586 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-2266/.minikube/profiles/addons-399511/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-238639 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.0: (1m21.271240412s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (81.27s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.56s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-238639 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [c41b1e1f-f228-4a12-9e70-069f3115b53c] Pending
helpers_test.go:344: "busybox" [c41b1e1f-f228-4a12-9e70-069f3115b53c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [c41b1e1f-f228-4a12-9e70-069f3115b53c] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.028601046s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-238639 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.56s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.27s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-238639 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-238639 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.158970845s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-238639 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.27s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (10.86s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-238639 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-238639 --alsologtostderr -v=3: (10.864033097s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (10.86s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-238639 -n default-k8s-diff-port-238639
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-238639 -n default-k8s-diff-port-238639: exit status 7 (71.26464ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-238639 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (270.42s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-238639 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-238639 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.0: (4m30.05988356s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-238639 -n default-k8s-diff-port-238639
E0829 19:24:43.426938    7586 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-2266/.minikube/profiles/no-preload-322941/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (270.42s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-wm4c6" [a31f965e-8170-4a82-b541-d304e0b057bb] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003335086s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-wm4c6" [a31f965e-8170-4a82-b541-d304e0b057bb] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004611609s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-775826 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.22s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-775826 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.22s)

TestStartStop/group/embed-certs/serial/Pause (2.88s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-775826 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-775826 -n embed-certs-775826
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-775826 -n embed-certs-775826: exit status 2 (325.868708ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-775826 -n embed-certs-775826
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-775826 -n embed-certs-775826: exit status 2 (425.471257ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-775826 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-775826 -n embed-certs-775826
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-775826 -n embed-certs-775826
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.88s)

TestStartStop/group/newest-cni/serial/FirstStart (38.75s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-420095 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.0
E0829 19:21:22.409891    7586 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-2266/.minikube/profiles/old-k8s-version-410254/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-420095 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.0: (38.746473259s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (38.75s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.23s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-420095 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-420095 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.226347523s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.23s)

TestStartStop/group/newest-cni/serial/Stop (5.8s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-420095 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-420095 --alsologtostderr -v=3: (5.799640027s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (5.80s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-420095 -n newest-cni-420095
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-420095 -n newest-cni-420095: exit status 7 (70.056892ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-420095 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/newest-cni/serial/SecondStart (19.2s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-420095 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.0
E0829 19:21:50.113000    7586 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-2266/.minikube/profiles/old-k8s-version-410254/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-420095 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.0: (18.565748801s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-420095 -n newest-cni-420095
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (19.20s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.28s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-420095 image list --format=json
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.28s)

TestStartStop/group/newest-cni/serial/Pause (3.37s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-420095 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-420095 -n newest-cni-420095
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-420095 -n newest-cni-420095: exit status 2 (382.742208ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-420095 -n newest-cni-420095
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-420095 -n newest-cni-420095: exit status 2 (477.211752ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-420095 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-420095 -n newest-cni-420095
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-420095 -n newest-cni-420095
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.37s)

TestNetworkPlugins/group/auto/Start (77.19s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-900466 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker
E0829 19:23:03.342578    7586 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-2266/.minikube/profiles/skaffold-409947/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-900466 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker: (1m17.193823503s)
--- PASS: TestNetworkPlugins/group/auto/Start (77.19s)

TestNetworkPlugins/group/auto/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-900466 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.31s)

TestNetworkPlugins/group/auto/NetCatPod (10.3s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-900466 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-74qfl" [58a3b3d9-3601-4b4d-98e0-125a2891a68d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0829 19:23:21.488972    7586 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-2266/.minikube/profiles/no-preload-322941/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:23:21.495303    7586 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-2266/.minikube/profiles/no-preload-322941/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:23:21.506660    7586 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-2266/.minikube/profiles/no-preload-322941/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:23:21.527996    7586 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-2266/.minikube/profiles/no-preload-322941/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:23:21.569343    7586 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-2266/.minikube/profiles/no-preload-322941/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:23:21.650912    7586 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-2266/.minikube/profiles/no-preload-322941/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:23:21.813135    7586 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-2266/.minikube/profiles/no-preload-322941/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:23:22.134751    7586 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-2266/.minikube/profiles/no-preload-322941/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:23:22.775989    7586 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-2266/.minikube/profiles/no-preload-322941/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-74qfl" [58a3b3d9-3601-4b4d-98e0-125a2891a68d] Running
E0829 19:23:24.057801    7586 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-2266/.minikube/profiles/no-preload-322941/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:23:26.619816    7586 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-2266/.minikube/profiles/no-preload-322941/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.004869402s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.30s)

TestNetworkPlugins/group/auto/DNS (0.21s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-900466 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.21s)

TestNetworkPlugins/group/auto/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-900466 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.18s)

TestNetworkPlugins/group/auto/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-900466 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.17s)

TestNetworkPlugins/group/kindnet/Start (68.73s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-900466 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker
E0829 19:24:02.465018    7586 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-2266/.minikube/profiles/no-preload-322941/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:24:27.583574    7586 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-2266/.minikube/profiles/functional-491299/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-900466 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker: (1m8.732591793s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (68.73s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-p7qpp" [98f1881f-4e59-4828-9823-adbc848dc366] Running
E0829 19:24:44.161560    7586 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-2266/.minikube/profiles/addons-399511/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003898445s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.14s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-p7qpp" [98f1881f-4e59-4828-9823-adbc848dc366] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003500225s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-238639 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.14s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.30s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-238639 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.30s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (3.18s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-238639 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-238639 -n default-k8s-diff-port-238639
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-238639 -n default-k8s-diff-port-238639: exit status 2 (373.666674ms)

                                                
                                                
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-238639 -n default-k8s-diff-port-238639
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-238639 -n default-k8s-diff-port-238639: exit status 2 (335.014023ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-238639 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-238639 -n default-k8s-diff-port-238639
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-238639 -n default-k8s-diff-port-238639
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.18s)
E0829 19:31:07.236841    7586 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-2266/.minikube/profiles/addons-399511/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:31:13.528971    7586 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-2266/.minikube/profiles/default-k8s-diff-port-238639/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:31:17.638791    7586 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-2266/.minikube/profiles/calico-900466/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:31:17.645297    7586 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-2266/.minikube/profiles/calico-900466/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:31:17.656786    7586 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-2266/.minikube/profiles/calico-900466/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:31:17.678288    7586 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-2266/.minikube/profiles/calico-900466/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:31:17.720409    7586 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-2266/.minikube/profiles/calico-900466/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:31:17.801876    7586 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-2266/.minikube/profiles/calico-900466/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:31:17.963430    7586 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-2266/.minikube/profiles/calico-900466/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:31:18.285341    7586 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-2266/.minikube/profiles/calico-900466/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:31:18.927002    7586 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-2266/.minikube/profiles/calico-900466/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:31:20.208816    7586 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-2266/.minikube/profiles/calico-900466/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:31:20.843894    7586 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-2266/.minikube/profiles/kindnet-900466/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:31:22.410078    7586 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-2266/.minikube/profiles/old-k8s-version-410254/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:31:22.770894    7586 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-2266/.minikube/profiles/calico-900466/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:31:27.892730    7586 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-2266/.minikube/profiles/calico-900466/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:31:38.135038    7586 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-2266/.minikube/profiles/calico-900466/client.crt: no such file or directory" logger="UnhandledError"

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-l4chk" [be161a74-a9d1-47b5-a441-02ad096db2a0] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.00434982s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (75.50s)
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-900466 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-900466 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker: (1m15.498780192s)
--- PASS: TestNetworkPlugins/group/calico/Start (75.50s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.39s)
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-900466 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.39s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (12.40s)
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-900466 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-fbt7g" [dfb651bc-b972-4363-bd13-8e8208af018e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-fbt7g" [dfb651bc-b972-4363-bd13-8e8208af018e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 12.004783957s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (12.40s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.27s)
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-900466 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.27s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.20s)
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-900466 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.20s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.24s)
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-900466 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.24s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (64.64s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-900466 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker
E0829 19:26:05.348468    7586 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-2266/.minikube/profiles/no-preload-322941/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-900466 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker: (1m4.644234229s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (64.64s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-f9mnx" [a095d4f4-b09d-4000-9fb4-de72fd38ee85] Running
E0829 19:26:22.409982    7586 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-2266/.minikube/profiles/old-k8s-version-410254/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004669158s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.37s)
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-900466 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.37s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (12.35s)
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-900466 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-rj9hq" [c186be75-700f-48cf-ab2e-8d80bec3150c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-rj9hq" [c186be75-700f-48cf-ab2e-8d80bec3150c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.004974899s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.35s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.31s)
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-900466 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.31s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.27s)
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-900466 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.27s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.26s)
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-900466 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.26s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.39s)
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-900466 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.39s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (13.41s)
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-900466 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-kr4pc" [8066f188-68cd-4dd7-8bb5-3396af32130e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-kr4pc" [8066f188-68cd-4dd7-8bb5-3396af32130e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 13.005494679s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (13.41s)

                                                
                                    
TestNetworkPlugins/group/false/Start (60.11s)
=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p false-900466 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p false-900466 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker: (1m0.111455345s)
--- PASS: TestNetworkPlugins/group/false/Start (60.11s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.38s)
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-900466 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.38s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.21s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-900466 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.21s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.25s)
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-900466 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.25s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (52.68s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-900466 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-900466 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker: (52.679565302s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (52.68s)

                                                
                                    
TestNetworkPlugins/group/false/KubeletFlags (0.36s)
=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p false-900466 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.36s)

                                                
                                    
TestNetworkPlugins/group/false/NetCatPod (12.41s)
=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-900466 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-54lrp" [d1a0ac0a-32c2-4968-bb86-dca9deb1b08f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0829 19:28:03.343383    7586 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-2266/.minikube/profiles/skaffold-409947/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-54lrp" [d1a0ac0a-32c2-4968-bb86-dca9deb1b08f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 12.006350579s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (12.41s)

                                                
                                    
TestNetworkPlugins/group/false/DNS (0.35s)
=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-900466 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.35s)

                                                
                                    
TestNetworkPlugins/group/false/Localhost (0.24s)
=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-900466 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.24s)

                                                
                                    
TestNetworkPlugins/group/false/HairPin (0.17s)
=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-900466 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.17s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.43s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-900466 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.43s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.43s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-900466 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-mn7j2" [c5055e1e-6f2f-4aab-86fc-27adc5e7697c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0829 19:28:23.819178    7586 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-2266/.minikube/profiles/auto-900466/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-mn7j2" [c5055e1e-6f2f-4aab-86fc-27adc5e7697c] Running
E0829 19:28:28.941196    7586 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-2266/.minikube/profiles/auto-900466/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.004781961s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.43s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.26s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-900466 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.26s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.25s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-900466 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.25s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.21s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-900466 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.21s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (62.02s)
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-900466 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker
E0829 19:28:39.182774    7586 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-2266/.minikube/profiles/auto-900466/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:28:49.190050    7586 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-2266/.minikube/profiles/no-preload-322941/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-900466 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker: (1m2.02444527s)
--- PASS: TestNetworkPlugins/group/flannel/Start (62.02s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (88.06s)
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-900466 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker
E0829 19:28:59.664408    7586 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-2266/.minikube/profiles/auto-900466/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:29:27.583727    7586 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-2266/.minikube/profiles/functional-491299/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-900466 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker: (1m28.060227704s)
--- PASS: TestNetworkPlugins/group/bridge/Start (88.06s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-8qbvg" [fb87b4e9-764c-4f0f-a9bd-bb920b0117b9] Running
E0829 19:29:40.626651    7586 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-2266/.minikube/profiles/auto-900466/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:29:44.161961    7586 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-2266/.minikube/profiles/addons-399511/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.006133385s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.41s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-900466 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.41s)

TestNetworkPlugins/group/flannel/NetCatPod (11.27s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-900466 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-j4vkh" [da667f2c-ce8a-4c9c-99c0-6491e007c1d6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-j4vkh" [da667f2c-ce8a-4c9c-99c0-6491e007c1d6] Running
E0829 19:29:51.591761    7586 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-2266/.minikube/profiles/default-k8s-diff-port-238639/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:29:51.598223    7586 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-2266/.minikube/profiles/default-k8s-diff-port-238639/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:29:51.609510    7586 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-2266/.minikube/profiles/default-k8s-diff-port-238639/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:29:51.630892    7586 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-2266/.minikube/profiles/default-k8s-diff-port-238639/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:29:51.672324    7586 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-2266/.minikube/profiles/default-k8s-diff-port-238639/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:29:51.753719    7586 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-2266/.minikube/profiles/default-k8s-diff-port-238639/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:29:51.915278    7586 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-2266/.minikube/profiles/default-k8s-diff-port-238639/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:29:52.236921    7586 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-2266/.minikube/profiles/default-k8s-diff-port-238639/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:29:52.878437    7586 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-2266/.minikube/profiles/default-k8s-diff-port-238639/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:29:54.160720    7586 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-2266/.minikube/profiles/default-k8s-diff-port-238639/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:29:56.722303    7586 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-2266/.minikube/profiles/default-k8s-diff-port-238639/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.004548438s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.27s)

TestNetworkPlugins/group/flannel/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-900466 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.18s)

TestNetworkPlugins/group/flannel/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-900466 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.17s)

TestNetworkPlugins/group/flannel/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-900466 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.17s)

TestNetworkPlugins/group/kubenet/Start (82.22s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kubenet-900466 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker
E0829 19:30:19.399361    7586 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-2266/.minikube/profiles/kindnet-900466/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kubenet-900466 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker: (1m22.223703956s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (82.22s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.33s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-900466 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.33s)

TestNetworkPlugins/group/bridge/NetCatPod (11.37s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-900466 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-hp2dm" [0fdfc075-0273-40bf-a7f0-81d10e052cb8] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0829 19:30:32.567665    7586 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-2266/.minikube/profiles/default-k8s-diff-port-238639/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-hp2dm" [0fdfc075-0273-40bf-a7f0-81d10e052cb8] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.007662574s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.37s)

TestNetworkPlugins/group/bridge/DNS (0.26s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-900466 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.26s)

TestNetworkPlugins/group/bridge/Localhost (0.22s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-900466 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.22s)

TestNetworkPlugins/group/bridge/HairPin (0.21s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-900466 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.21s)

TestNetworkPlugins/group/kubenet/KubeletFlags (0.27s)

=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kubenet-900466 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.27s)

TestNetworkPlugins/group/kubenet/NetCatPod (11.26s)

=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-900466 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-j8nwl" [31bf525f-cee8-45b8-ac99-b9f71fc2b406] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-j8nwl" [31bf525f-cee8-45b8-ac99-b9f71fc2b406] Running
E0829 19:31:49.109053    7586 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-2266/.minikube/profiles/custom-flannel-900466/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:31:49.115501    7586 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-2266/.minikube/profiles/custom-flannel-900466/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:31:49.126894    7586 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-2266/.minikube/profiles/custom-flannel-900466/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:31:49.148359    7586 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-2266/.minikube/profiles/custom-flannel-900466/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:31:49.189788    7586 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-2266/.minikube/profiles/custom-flannel-900466/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:31:49.271208    7586 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-2266/.minikube/profiles/custom-flannel-900466/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:31:49.433025    7586 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-2266/.minikube/profiles/custom-flannel-900466/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:31:49.754840    7586 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-2266/.minikube/profiles/custom-flannel-900466/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:31:50.396365    7586 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-2266/.minikube/profiles/custom-flannel-900466/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:31:51.678110    7586 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-2266/.minikube/profiles/custom-flannel-900466/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 11.003919501s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (11.26s)

TestNetworkPlugins/group/kubenet/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-900466 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.17s)

TestNetworkPlugins/group/kubenet/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-900466 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.15s)

TestNetworkPlugins/group/kubenet/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-900466 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.16s)

Test skip (24/343)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.31.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0/cached-images (0.00s)

TestDownloadOnly/v1.31.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0/binaries (0.00s)

TestDownloadOnly/v1.31.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.0/kubectl (0.00s)

TestDownloadOnlyKic (0.52s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-299649 --alsologtostderr --driver=docker  --container-runtime=docker
aaa_download_only_test.go:244: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-299649" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-299649
--- SKIP: TestDownloadOnlyKic (0.52s)

TestAddons/parallel/HelmTiller (0s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:446: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1787: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestStartStop/group/disable-driver-mounts (0.21s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-852737" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-852737
--- SKIP: TestStartStop/group/disable-driver-mounts (0.21s)

TestNetworkPlugins/group/cilium (4.85s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-900466 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-900466

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-900466

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-900466

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-900466

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-900466

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-900466

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-900466

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-900466

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-900466

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-900466

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-900466" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-900466"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-900466" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-900466"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-900466" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-900466"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-900466

>>> host: crictl pods:
* Profile "cilium-900466" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-900466"

>>> host: crictl containers:
* Profile "cilium-900466" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-900466"

>>> k8s: describe netcat deployment:
error: context "cilium-900466" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-900466" does not exist

>>> k8s: netcat logs:
error: context "cilium-900466" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-900466" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-900466" does not exist

>>> k8s: coredns logs:
error: context "cilium-900466" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-900466" does not exist

>>> k8s: api server logs:
error: context "cilium-900466" does not exist

>>> host: /etc/cni:
* Profile "cilium-900466" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-900466"

>>> host: ip a s:
* Profile "cilium-900466" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-900466"

>>> host: ip r s:
* Profile "cilium-900466" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-900466"

>>> host: iptables-save:
* Profile "cilium-900466" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-900466"

>>> host: iptables table nat:
* Profile "cilium-900466" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-900466"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-900466

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-900466

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-900466" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-900466" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-900466

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-900466

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-900466" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-900466" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-900466" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-900466" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-900466" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-900466" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-900466"

>>> host: kubelet daemon config:
* Profile "cilium-900466" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-900466"

>>> k8s: kubelet logs:
* Profile "cilium-900466" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-900466"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-900466" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-900466"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-900466" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-900466"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-900466

>>> host: docker daemon status:
* Profile "cilium-900466" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-900466"

>>> host: docker daemon config:
* Profile "cilium-900466" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-900466"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-900466" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-900466"

>>> host: docker system info:
* Profile "cilium-900466" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-900466"

>>> host: cri-docker daemon status:
* Profile "cilium-900466" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-900466"

>>> host: cri-docker daemon config:
* Profile "cilium-900466" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-900466"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-900466" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-900466"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-900466" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-900466"

>>> host: cri-dockerd version:
* Profile "cilium-900466" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-900466"

>>> host: containerd daemon status:
* Profile "cilium-900466" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-900466"

>>> host: containerd daemon config:
* Profile "cilium-900466" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-900466"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-900466" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-900466"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-900466" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-900466"

>>> host: containerd config dump:
* Profile "cilium-900466" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-900466"

>>> host: crio daemon status:
* Profile "cilium-900466" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-900466"

>>> host: crio daemon config:
* Profile "cilium-900466" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-900466"

>>> host: /etc/crio:
* Profile "cilium-900466" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-900466"

>>> host: crio config:
* Profile "cilium-900466" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-900466"

----------------------- debugLogs end: cilium-900466 [took: 4.670878735s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-900466" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-900466
--- SKIP: TestNetworkPlugins/group/cilium (4.85s)
