Test Report: Docker_Linux_docker_arm64 20539

404431ee24582bacb75d7cfbedbe3aa3f9ffc1a2:2025-03-17:38754
Failed tests (1/346)

| Order | Failed test           | Duration |
|-------|-----------------------|----------|
| 257   | TestScheduledStopUnix | 36.97s   |

TestScheduledStopUnix (36.97s)
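The failing sequence, reconstructed from the commands the test harness ran (see the `(dbg) Run:` lines in the log below), can be replayed manually. This is a sketch only: it assumes a locally built `out/minikube-linux-arm64` binary and a running docker daemon, as in the CI environment.

```shell
# Commands copied from the test log; profile name as it appears in the report.
out/minikube-linux-arm64 start -p scheduled-stop-243584 --memory=2048 --driver=docker --container-runtime=docker
out/minikube-linux-arm64 stop -p scheduled-stop-243584 --schedule 5m
out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-243584 -n scheduled-stop-243584
# Rescheduling the stop should kill the previously scheduled stop process.
# The test failed at this step: the earlier process (pid 1327554) was still
# running after the reschedule ("process ... running but should have been
# killed on reschedule of stop").
out/minikube-linux-arm64 stop -p scheduled-stop-243584 --schedule 15s
```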

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-243584 --memory=2048 --driver=docker  --container-runtime=docker
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-243584 --memory=2048 --driver=docker  --container-runtime=docker: (32.194492428s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-243584 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-243584 -n scheduled-stop-243584
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-243584 --schedule 15s
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:98: process 1327554 running but should have been killed on reschedule of stop
panic.go:631: *** TestScheduledStopUnix FAILED at 2025-03-17 13:51:54.231214131 +0000 UTC m=+2219.191148020
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestScheduledStopUnix]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect scheduled-stop-243584
helpers_test.go:235: (dbg) docker inspect scheduled-stop-243584:

-- stdout --
	[
	    {
	        "Id": "0a4a4e7d230b278a6616dec5b3575d06c04c5a58dfa7bf70f80e01e5d01b244c",
	        "Created": "2025-03-17T13:51:26.414413002Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1324594,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-03-17T13:51:26.47744243Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:df0c2544fb3106b890f0a9ab81fcf49f97edb092b83e47f42288ad5dfe1f4b40",
	        "ResolvConfPath": "/var/lib/docker/containers/0a4a4e7d230b278a6616dec5b3575d06c04c5a58dfa7bf70f80e01e5d01b244c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0a4a4e7d230b278a6616dec5b3575d06c04c5a58dfa7bf70f80e01e5d01b244c/hostname",
	        "HostsPath": "/var/lib/docker/containers/0a4a4e7d230b278a6616dec5b3575d06c04c5a58dfa7bf70f80e01e5d01b244c/hosts",
	        "LogPath": "/var/lib/docker/containers/0a4a4e7d230b278a6616dec5b3575d06c04c5a58dfa7bf70f80e01e5d01b244c/0a4a4e7d230b278a6616dec5b3575d06c04c5a58dfa7bf70f80e01e5d01b244c-json.log",
	        "Name": "/scheduled-stop-243584",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "scheduled-stop-243584:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "scheduled-stop-243584",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2147483648,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "0a4a4e7d230b278a6616dec5b3575d06c04c5a58dfa7bf70f80e01e5d01b244c",
	                "LowerDir": "/var/lib/docker/overlay2/b628189b32978c86b31fcf50aff1b4ff565d37c56cd44716d00dd5fb6010c34d-init/diff:/var/lib/docker/overlay2/41521760173e9c0e383fdb1e0e82a24e9241667b8273679076afa7a5eb322b96/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b628189b32978c86b31fcf50aff1b4ff565d37c56cd44716d00dd5fb6010c34d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b628189b32978c86b31fcf50aff1b4ff565d37c56cd44716d00dd5fb6010c34d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b628189b32978c86b31fcf50aff1b4ff565d37c56cd44716d00dd5fb6010c34d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "scheduled-stop-243584",
	                "Source": "/var/lib/docker/volumes/scheduled-stop-243584/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "scheduled-stop-243584",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "scheduled-stop-243584",
	                "name.minikube.sigs.k8s.io": "scheduled-stop-243584",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "eb6783fb2afdd78c72bee253e8f1a35888eff88c801521a188692a5338b7a7ef",
	            "SandboxKey": "/var/run/docker/netns/eb6783fb2afd",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33941"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33942"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33945"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33943"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33944"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "scheduled-stop-243584": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ee:70:d4:0a:b8:67",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "07dc9aa7fca7da570fa760917bbf5240191ee1c3e25701324ad7ed7fa1692f54",
	                    "EndpointID": "909cce6b38f9f0db2f963cbdf018b22e9b87edbfd753deb290a0263ba256905a",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "scheduled-stop-243584",
	                        "0a4a4e7d230b"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-243584 -n scheduled-stop-243584
helpers_test.go:244: <<< TestScheduledStopUnix FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestScheduledStopUnix]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p scheduled-stop-243584 logs -n 25
helpers_test.go:252: TestScheduledStopUnix logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|-----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |        Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|-----------------------|---------|---------|---------------------|---------------------|
	| stop    | -p multinode-309686            | multinode-309686      | jenkins | v1.35.0 | 17 Mar 25 13:46 UTC | 17 Mar 25 13:46 UTC |
	| start   | -p multinode-309686            | multinode-309686      | jenkins | v1.35.0 | 17 Mar 25 13:46 UTC | 17 Mar 25 13:47 UTC |
	|         | --wait=true -v=8               |                       |         |         |                     |                     |
	|         | --alsologtostderr              |                       |         |         |                     |                     |
	| node    | list -p multinode-309686       | multinode-309686      | jenkins | v1.35.0 | 17 Mar 25 13:47 UTC |                     |
	| node    | multinode-309686 node delete   | multinode-309686      | jenkins | v1.35.0 | 17 Mar 25 13:47 UTC | 17 Mar 25 13:47 UTC |
	|         | m03                            |                       |         |         |                     |                     |
	| stop    | multinode-309686 stop          | multinode-309686      | jenkins | v1.35.0 | 17 Mar 25 13:47 UTC | 17 Mar 25 13:47 UTC |
	| start   | -p multinode-309686            | multinode-309686      | jenkins | v1.35.0 | 17 Mar 25 13:47 UTC | 17 Mar 25 13:48 UTC |
	|         | --wait=true -v=8               |                       |         |         |                     |                     |
	|         | --alsologtostderr              |                       |         |         |                     |                     |
	|         | --driver=docker                |                       |         |         |                     |                     |
	|         | --container-runtime=docker     |                       |         |         |                     |                     |
	| node    | list -p multinode-309686       | multinode-309686      | jenkins | v1.35.0 | 17 Mar 25 13:48 UTC |                     |
	| start   | -p multinode-309686-m02        | multinode-309686-m02  | jenkins | v1.35.0 | 17 Mar 25 13:48 UTC |                     |
	|         | --driver=docker                |                       |         |         |                     |                     |
	|         | --container-runtime=docker     |                       |         |         |                     |                     |
	| start   | -p multinode-309686-m03        | multinode-309686-m03  | jenkins | v1.35.0 | 17 Mar 25 13:48 UTC | 17 Mar 25 13:49 UTC |
	|         | --driver=docker                |                       |         |         |                     |                     |
	|         | --container-runtime=docker     |                       |         |         |                     |                     |
	| node    | add -p multinode-309686        | multinode-309686      | jenkins | v1.35.0 | 17 Mar 25 13:49 UTC |                     |
	| delete  | -p multinode-309686-m03        | multinode-309686-m03  | jenkins | v1.35.0 | 17 Mar 25 13:49 UTC | 17 Mar 25 13:49 UTC |
	| delete  | -p multinode-309686            | multinode-309686      | jenkins | v1.35.0 | 17 Mar 25 13:49 UTC | 17 Mar 25 13:49 UTC |
	| start   | -p test-preload-700680         | test-preload-700680   | jenkins | v1.35.0 | 17 Mar 25 13:49 UTC | 17 Mar 25 13:50 UTC |
	|         | --memory=2200                  |                       |         |         |                     |                     |
	|         | --alsologtostderr              |                       |         |         |                     |                     |
	|         | --wait=true --preload=false    |                       |         |         |                     |                     |
	|         | --driver=docker                |                       |         |         |                     |                     |
	|         | --container-runtime=docker     |                       |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.4   |                       |         |         |                     |                     |
	| image   | test-preload-700680 image pull | test-preload-700680   | jenkins | v1.35.0 | 17 Mar 25 13:50 UTC | 17 Mar 25 13:50 UTC |
	|         | gcr.io/k8s-minikube/busybox    |                       |         |         |                     |                     |
	| stop    | -p test-preload-700680         | test-preload-700680   | jenkins | v1.35.0 | 17 Mar 25 13:50 UTC | 17 Mar 25 13:50 UTC |
	| start   | -p test-preload-700680         | test-preload-700680   | jenkins | v1.35.0 | 17 Mar 25 13:50 UTC | 17 Mar 25 13:51 UTC |
	|         | --memory=2200                  |                       |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                       |         |         |                     |                     |
	|         | --wait=true --driver=docker    |                       |         |         |                     |                     |
	|         | --container-runtime=docker     |                       |         |         |                     |                     |
	| image   | test-preload-700680 image list | test-preload-700680   | jenkins | v1.35.0 | 17 Mar 25 13:51 UTC | 17 Mar 25 13:51 UTC |
	| delete  | -p test-preload-700680         | test-preload-700680   | jenkins | v1.35.0 | 17 Mar 25 13:51 UTC | 17 Mar 25 13:51 UTC |
	| start   | -p scheduled-stop-243584       | scheduled-stop-243584 | jenkins | v1.35.0 | 17 Mar 25 13:51 UTC | 17 Mar 25 13:51 UTC |
	|         | --memory=2048 --driver=docker  |                       |         |         |                     |                     |
	|         | --container-runtime=docker     |                       |         |         |                     |                     |
	| stop    | -p scheduled-stop-243584       | scheduled-stop-243584 | jenkins | v1.35.0 | 17 Mar 25 13:51 UTC |                     |
	|         | --schedule 5m                  |                       |         |         |                     |                     |
	| stop    | -p scheduled-stop-243584       | scheduled-stop-243584 | jenkins | v1.35.0 | 17 Mar 25 13:51 UTC |                     |
	|         | --schedule 5m                  |                       |         |         |                     |                     |
	| stop    | -p scheduled-stop-243584       | scheduled-stop-243584 | jenkins | v1.35.0 | 17 Mar 25 13:51 UTC |                     |
	|         | --schedule 5m                  |                       |         |         |                     |                     |
	| stop    | -p scheduled-stop-243584       | scheduled-stop-243584 | jenkins | v1.35.0 | 17 Mar 25 13:51 UTC |                     |
	|         | --schedule 15s                 |                       |         |         |                     |                     |
	| stop    | -p scheduled-stop-243584       | scheduled-stop-243584 | jenkins | v1.35.0 | 17 Mar 25 13:51 UTC |                     |
	|         | --schedule 15s                 |                       |         |         |                     |                     |
	| stop    | -p scheduled-stop-243584       | scheduled-stop-243584 | jenkins | v1.35.0 | 17 Mar 25 13:51 UTC |                     |
	|         | --schedule 15s                 |                       |         |         |                     |                     |
	|---------|--------------------------------|-----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/03/17 13:51:21
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0317 13:51:21.576491 1324214 out.go:345] Setting OutFile to fd 1 ...
	I0317 13:51:21.576601 1324214 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0317 13:51:21.576604 1324214 out.go:358] Setting ErrFile to fd 2...
	I0317 13:51:21.576608 1324214 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0317 13:51:21.576868 1324214 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20539-1115410/.minikube/bin
	I0317 13:51:21.577246 1324214 out.go:352] Setting JSON to false
	I0317 13:51:21.578108 1324214 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":34432,"bootTime":1742185049,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1077-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0317 13:51:21.578159 1324214 start.go:139] virtualization:  
	I0317 13:51:21.581845 1324214 out.go:177] * [scheduled-stop-243584] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	I0317 13:51:21.586349 1324214 out.go:177]   - MINIKUBE_LOCATION=20539
	I0317 13:51:21.586459 1324214 notify.go:220] Checking for updates...
	I0317 13:51:21.592869 1324214 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0317 13:51:21.596066 1324214 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20539-1115410/kubeconfig
	I0317 13:51:21.599062 1324214 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20539-1115410/.minikube
	I0317 13:51:21.602088 1324214 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0317 13:51:21.605013 1324214 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0317 13:51:21.608141 1324214 driver.go:394] Setting default libvirt URI to qemu:///system
	I0317 13:51:21.631095 1324214 docker.go:123] docker version: linux-28.0.1:Docker Engine - Community
	I0317 13:51:21.631230 1324214 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0317 13:51:21.689819 1324214 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2025-03-17 13:51:21.680744375 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1077-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> Ser
verErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.21.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.33.1]] Warnings:<nil>}}
	I0317 13:51:21.689911 1324214 docker.go:318] overlay module found
	I0317 13:51:21.693117 1324214 out.go:177] * Using the docker driver based on user configuration
	I0317 13:51:21.695949 1324214 start.go:297] selected driver: docker
	I0317 13:51:21.695960 1324214 start.go:901] validating driver "docker" against <nil>
	I0317 13:51:21.695971 1324214 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0317 13:51:21.696675 1324214 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0317 13:51:21.762007 1324214 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2025-03-17 13:51:21.753107103 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1077-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.21.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.33.1]] Warnings:<nil>}}
	I0317 13:51:21.762146 1324214 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0317 13:51:21.762372 1324214 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0317 13:51:21.765376 1324214 out.go:177] * Using Docker driver with root privileges
	I0317 13:51:21.768231 1324214 cni.go:84] Creating CNI manager for ""
	I0317 13:51:21.768296 1324214 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0317 13:51:21.768304 1324214 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0317 13:51:21.768380 1324214 start.go:340] cluster config:
	{Name:scheduled-stop-243584 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2048 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:scheduled-stop-243584 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0317 13:51:21.771569 1324214 out.go:177] * Starting "scheduled-stop-243584" primary control-plane node in "scheduled-stop-243584" cluster
	I0317 13:51:21.774410 1324214 cache.go:121] Beginning downloading kic base image for docker with docker
	I0317 13:51:21.777276 1324214 out.go:177] * Pulling base image v0.0.46-1741860993-20523 ...
	I0317 13:51:21.780058 1324214 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime docker
	I0317 13:51:21.780107 1324214 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20539-1115410/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-docker-overlay2-arm64.tar.lz4
	I0317 13:51:21.780112 1324214 cache.go:56] Caching tarball of preloaded images
	I0317 13:51:21.780145 1324214 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 in local docker daemon
	I0317 13:51:21.780202 1324214 preload.go:172] Found /home/jenkins/minikube-integration/20539-1115410/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0317 13:51:21.780211 1324214 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on docker
	I0317 13:51:21.780534 1324214 profile.go:143] Saving config to /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/scheduled-stop-243584/config.json ...
	I0317 13:51:21.780552 1324214 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/scheduled-stop-243584/config.json: {Name:mk752da9b04f855539078e866057e6088edc9e4e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 13:51:21.797906 1324214 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 in local docker daemon, skipping pull
	I0317 13:51:21.797919 1324214 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 exists in daemon, skipping load
	I0317 13:51:21.797949 1324214 cache.go:230] Successfully downloaded all kic artifacts
	I0317 13:51:21.797980 1324214 start.go:360] acquireMachinesLock for scheduled-stop-243584: {Name:mk1910180da4c9169938285321ea1aa7f80b087b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0317 13:51:21.798097 1324214 start.go:364] duration metric: took 102.047µs to acquireMachinesLock for "scheduled-stop-243584"
	I0317 13:51:21.798123 1324214 start.go:93] Provisioning new machine with config: &{Name:scheduled-stop-243584 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2048 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:scheduled-stop-243584 Namespace:default APIServerHAVIP: A
PIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath
: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0317 13:51:21.798198 1324214 start.go:125] createHost starting for "" (driver="docker")
	I0317 13:51:21.801558 1324214 out.go:235] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0317 13:51:21.801853 1324214 start.go:159] libmachine.API.Create for "scheduled-stop-243584" (driver="docker")
	I0317 13:51:21.801885 1324214 client.go:168] LocalClient.Create starting
	I0317 13:51:21.801957 1324214 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20539-1115410/.minikube/certs/ca.pem
	I0317 13:51:21.801990 1324214 main.go:141] libmachine: Decoding PEM data...
	I0317 13:51:21.802001 1324214 main.go:141] libmachine: Parsing certificate...
	I0317 13:51:21.802053 1324214 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20539-1115410/.minikube/certs/cert.pem
	I0317 13:51:21.802068 1324214 main.go:141] libmachine: Decoding PEM data...
	I0317 13:51:21.802079 1324214 main.go:141] libmachine: Parsing certificate...
	I0317 13:51:21.802431 1324214 cli_runner.go:164] Run: docker network inspect scheduled-stop-243584 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0317 13:51:21.819197 1324214 cli_runner.go:211] docker network inspect scheduled-stop-243584 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0317 13:51:21.819264 1324214 network_create.go:284] running [docker network inspect scheduled-stop-243584] to gather additional debugging logs...
	I0317 13:51:21.819279 1324214 cli_runner.go:164] Run: docker network inspect scheduled-stop-243584
	W0317 13:51:21.835437 1324214 cli_runner.go:211] docker network inspect scheduled-stop-243584 returned with exit code 1
	I0317 13:51:21.835457 1324214 network_create.go:287] error running [docker network inspect scheduled-stop-243584]: docker network inspect scheduled-stop-243584: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network scheduled-stop-243584 not found
	I0317 13:51:21.835470 1324214 network_create.go:289] output of [docker network inspect scheduled-stop-243584]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network scheduled-stop-243584 not found
	
	** /stderr **
	I0317 13:51:21.835593 1324214 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0317 13:51:21.851543 1324214 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-509972d2f15a IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:ee:dc:aa:ea:d5} reservation:<nil>}
	I0317 13:51:21.851903 1324214 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-c97a9322feda IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:76:dc:3b:ec:43:f8} reservation:<nil>}
	I0317 13:51:21.852164 1324214 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-b4002cd7e09a IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:9a:23:b7:f0:11:bd} reservation:<nil>}
	I0317 13:51:21.852535 1324214 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400188fed0}
	I0317 13:51:21.852553 1324214 network_create.go:124] attempt to create docker network scheduled-stop-243584 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0317 13:51:21.852611 1324214 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=scheduled-stop-243584 scheduled-stop-243584
	I0317 13:51:21.908358 1324214 network_create.go:108] docker network scheduled-stop-243584 192.168.76.0/24 created
	I0317 13:51:21.908382 1324214 kic.go:121] calculated static IP "192.168.76.2" for the "scheduled-stop-243584" container
	I0317 13:51:21.908504 1324214 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0317 13:51:21.923325 1324214 cli_runner.go:164] Run: docker volume create scheduled-stop-243584 --label name.minikube.sigs.k8s.io=scheduled-stop-243584 --label created_by.minikube.sigs.k8s.io=true
	I0317 13:51:21.941649 1324214 oci.go:103] Successfully created a docker volume scheduled-stop-243584
	I0317 13:51:21.941748 1324214 cli_runner.go:164] Run: docker run --rm --name scheduled-stop-243584-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=scheduled-stop-243584 --entrypoint /usr/bin/test -v scheduled-stop-243584:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 -d /var/lib
	I0317 13:51:22.460606 1324214 oci.go:107] Successfully prepared a docker volume scheduled-stop-243584
	I0317 13:51:22.460651 1324214 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime docker
	I0317 13:51:22.460669 1324214 kic.go:194] Starting extracting preloaded images to volume ...
	I0317 13:51:22.460746 1324214 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20539-1115410/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v scheduled-stop-243584:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 -I lz4 -xf /preloaded.tar -C /extractDir
	I0317 13:51:26.345905 1324214 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20539-1115410/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v scheduled-stop-243584:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 -I lz4 -xf /preloaded.tar -C /extractDir: (3.88512338s)
	I0317 13:51:26.345933 1324214 kic.go:203] duration metric: took 3.885260151s to extract preloaded images to volume ...
	W0317 13:51:26.346077 1324214 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0317 13:51:26.346178 1324214 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0317 13:51:26.398909 1324214 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname scheduled-stop-243584 --name scheduled-stop-243584 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=scheduled-stop-243584 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=scheduled-stop-243584 --network scheduled-stop-243584 --ip 192.168.76.2 --volume scheduled-stop-243584:/var --security-opt apparmor=unconfined --memory=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185
	I0317 13:51:26.682078 1324214 cli_runner.go:164] Run: docker container inspect scheduled-stop-243584 --format={{.State.Running}}
	I0317 13:51:26.704574 1324214 cli_runner.go:164] Run: docker container inspect scheduled-stop-243584 --format={{.State.Status}}
	I0317 13:51:26.731362 1324214 cli_runner.go:164] Run: docker exec scheduled-stop-243584 stat /var/lib/dpkg/alternatives/iptables
	I0317 13:51:26.781070 1324214 oci.go:144] the created container "scheduled-stop-243584" has a running status.
	I0317 13:51:26.781098 1324214 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/20539-1115410/.minikube/machines/scheduled-stop-243584/id_rsa...
	I0317 13:51:27.516125 1324214 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/20539-1115410/.minikube/machines/scheduled-stop-243584/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0317 13:51:27.552514 1324214 cli_runner.go:164] Run: docker container inspect scheduled-stop-243584 --format={{.State.Status}}
	I0317 13:51:27.575048 1324214 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0317 13:51:27.575060 1324214 kic_runner.go:114] Args: [docker exec --privileged scheduled-stop-243584 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0317 13:51:27.637071 1324214 cli_runner.go:164] Run: docker container inspect scheduled-stop-243584 --format={{.State.Status}}
	I0317 13:51:27.661135 1324214 machine.go:93] provisionDockerMachine start ...
	I0317 13:51:27.661216 1324214 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-243584
	I0317 13:51:27.685153 1324214 main.go:141] libmachine: Using SSH client type: native
	I0317 13:51:27.685531 1324214 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e66c0] 0x3e8e80 <nil>  [] 0s} 127.0.0.1 33941 <nil> <nil>}
	I0317 13:51:27.685540 1324214 main.go:141] libmachine: About to run SSH command:
	hostname
	I0317 13:51:27.823374 1324214 main.go:141] libmachine: SSH cmd err, output: <nil>: scheduled-stop-243584
	
	I0317 13:51:27.823388 1324214 ubuntu.go:169] provisioning hostname "scheduled-stop-243584"
	I0317 13:51:27.823450 1324214 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-243584
	I0317 13:51:27.841943 1324214 main.go:141] libmachine: Using SSH client type: native
	I0317 13:51:27.842278 1324214 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e66c0] 0x3e8e80 <nil>  [] 0s} 127.0.0.1 33941 <nil> <nil>}
	I0317 13:51:27.842288 1324214 main.go:141] libmachine: About to run SSH command:
	sudo hostname scheduled-stop-243584 && echo "scheduled-stop-243584" | sudo tee /etc/hostname
	I0317 13:51:27.984664 1324214 main.go:141] libmachine: SSH cmd err, output: <nil>: scheduled-stop-243584
	
	I0317 13:51:27.984732 1324214 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-243584
	I0317 13:51:28.006686 1324214 main.go:141] libmachine: Using SSH client type: native
	I0317 13:51:28.007007 1324214 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e66c0] 0x3e8e80 <nil>  [] 0s} 127.0.0.1 33941 <nil> <nil>}
	I0317 13:51:28.007023 1324214 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sscheduled-stop-243584' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 scheduled-stop-243584/g' /etc/hosts;
				else 
					echo '127.0.1.1 scheduled-stop-243584' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0317 13:51:28.132013 1324214 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0317 13:51:28.132028 1324214 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20539-1115410/.minikube CaCertPath:/home/jenkins/minikube-integration/20539-1115410/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20539-1115410/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20539-1115410/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20539-1115410/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20539-1115410/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20539-1115410/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20539-1115410/.minikube}
	I0317 13:51:28.132045 1324214 ubuntu.go:177] setting up certificates
	I0317 13:51:28.132054 1324214 provision.go:84] configureAuth start
	I0317 13:51:28.132111 1324214 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" scheduled-stop-243584
	I0317 13:51:28.150525 1324214 provision.go:143] copyHostCerts
	I0317 13:51:28.150587 1324214 exec_runner.go:144] found /home/jenkins/minikube-integration/20539-1115410/.minikube/ca.pem, removing ...
	I0317 13:51:28.150595 1324214 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20539-1115410/.minikube/ca.pem
	I0317 13:51:28.150672 1324214 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20539-1115410/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20539-1115410/.minikube/ca.pem (1082 bytes)
	I0317 13:51:28.150766 1324214 exec_runner.go:144] found /home/jenkins/minikube-integration/20539-1115410/.minikube/cert.pem, removing ...
	I0317 13:51:28.150770 1324214 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20539-1115410/.minikube/cert.pem
	I0317 13:51:28.150794 1324214 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20539-1115410/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20539-1115410/.minikube/cert.pem (1123 bytes)
	I0317 13:51:28.150868 1324214 exec_runner.go:144] found /home/jenkins/minikube-integration/20539-1115410/.minikube/key.pem, removing ...
	I0317 13:51:28.150872 1324214 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20539-1115410/.minikube/key.pem
	I0317 13:51:28.150901 1324214 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20539-1115410/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20539-1115410/.minikube/key.pem (1675 bytes)
	I0317 13:51:28.150952 1324214 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20539-1115410/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20539-1115410/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20539-1115410/.minikube/certs/ca-key.pem org=jenkins.scheduled-stop-243584 san=[127.0.0.1 192.168.76.2 localhost minikube scheduled-stop-243584]
	I0317 13:51:28.455009 1324214 provision.go:177] copyRemoteCerts
	I0317 13:51:28.455068 1324214 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0317 13:51:28.455108 1324214 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-243584
	I0317 13:51:28.472247 1324214 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33941 SSHKeyPath:/home/jenkins/minikube-integration/20539-1115410/.minikube/machines/scheduled-stop-243584/id_rsa Username:docker}
	I0317 13:51:28.564963 1324214 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-1115410/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0317 13:51:28.591329 1324214 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-1115410/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0317 13:51:28.615429 1324214 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-1115410/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0317 13:51:28.640344 1324214 provision.go:87] duration metric: took 508.275605ms to configureAuth
	I0317 13:51:28.640363 1324214 ubuntu.go:193] setting minikube options for container-runtime
	I0317 13:51:28.640550 1324214 config.go:182] Loaded profile config "scheduled-stop-243584": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0317 13:51:28.640607 1324214 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-243584
	I0317 13:51:28.657948 1324214 main.go:141] libmachine: Using SSH client type: native
	I0317 13:51:28.658254 1324214 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e66c0] 0x3e8e80 <nil>  [] 0s} 127.0.0.1 33941 <nil> <nil>}
	I0317 13:51:28.658261 1324214 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0317 13:51:28.780311 1324214 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0317 13:51:28.780322 1324214 ubuntu.go:71] root file system type: overlay
	I0317 13:51:28.780428 1324214 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0317 13:51:28.780494 1324214 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-243584
	I0317 13:51:28.798198 1324214 main.go:141] libmachine: Using SSH client type: native
	I0317 13:51:28.798503 1324214 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e66c0] 0x3e8e80 <nil>  [] 0s} 127.0.0.1 33941 <nil> <nil>}
	I0317 13:51:28.798577 1324214 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0317 13:51:28.932110 1324214 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0317 13:51:28.932185 1324214 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-243584
	I0317 13:51:28.950033 1324214 main.go:141] libmachine: Using SSH client type: native
	I0317 13:51:28.950345 1324214 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e66c0] 0x3e8e80 <nil>  [] 0s} 127.0.0.1 33941 <nil> <nil>}
	I0317 13:51:28.950364 1324214 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0317 13:51:29.740332 1324214 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-02-26 10:39:24.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-03-17 13:51:28.926136675 +0000
	@@ -1,46 +1,49 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0317 13:51:29.740354 1324214 machine.go:96] duration metric: took 2.079207247s to provisionDockerMachine
	I0317 13:51:29.740363 1324214 client.go:171] duration metric: took 7.938474199s to LocalClient.Create
	I0317 13:51:29.740375 1324214 start.go:167] duration metric: took 7.938523314s to libmachine.API.Create "scheduled-stop-243584"
	I0317 13:51:29.740381 1324214 start.go:293] postStartSetup for "scheduled-stop-243584" (driver="docker")
	I0317 13:51:29.740389 1324214 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0317 13:51:29.740447 1324214 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0317 13:51:29.740499 1324214 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-243584
	I0317 13:51:29.758732 1324214 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33941 SSHKeyPath:/home/jenkins/minikube-integration/20539-1115410/.minikube/machines/scheduled-stop-243584/id_rsa Username:docker}
	I0317 13:51:29.853338 1324214 ssh_runner.go:195] Run: cat /etc/os-release
	I0317 13:51:29.856471 1324214 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0317 13:51:29.856492 1324214 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0317 13:51:29.856505 1324214 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0317 13:51:29.856511 1324214 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0317 13:51:29.856519 1324214 filesync.go:126] Scanning /home/jenkins/minikube-integration/20539-1115410/.minikube/addons for local assets ...
	I0317 13:51:29.856581 1324214 filesync.go:126] Scanning /home/jenkins/minikube-integration/20539-1115410/.minikube/files for local assets ...
	I0317 13:51:29.856661 1324214 filesync.go:149] local asset: /home/jenkins/minikube-integration/20539-1115410/.minikube/files/etc/ssl/certs/11207312.pem -> 11207312.pem in /etc/ssl/certs
	I0317 13:51:29.856769 1324214 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0317 13:51:29.865418 1324214 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-1115410/.minikube/files/etc/ssl/certs/11207312.pem --> /etc/ssl/certs/11207312.pem (1708 bytes)
	I0317 13:51:29.889809 1324214 start.go:296] duration metric: took 149.414444ms for postStartSetup
	I0317 13:51:29.890218 1324214 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" scheduled-stop-243584
	I0317 13:51:29.907137 1324214 profile.go:143] Saving config to /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/scheduled-stop-243584/config.json ...
	I0317 13:51:29.907410 1324214 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0317 13:51:29.907448 1324214 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-243584
	I0317 13:51:29.924253 1324214 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33941 SSHKeyPath:/home/jenkins/minikube-integration/20539-1115410/.minikube/machines/scheduled-stop-243584/id_rsa Username:docker}
	I0317 13:51:30.022605 1324214 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0317 13:51:30.028737 1324214 start.go:128] duration metric: took 8.230523187s to createHost
	I0317 13:51:30.028754 1324214 start.go:83] releasing machines lock for "scheduled-stop-243584", held for 8.230649185s
	I0317 13:51:30.028839 1324214 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" scheduled-stop-243584
	I0317 13:51:30.050966 1324214 ssh_runner.go:195] Run: cat /version.json
	I0317 13:51:30.051013 1324214 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-243584
	I0317 13:51:30.051331 1324214 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0317 13:51:30.051400 1324214 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-243584
	I0317 13:51:30.073997 1324214 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33941 SSHKeyPath:/home/jenkins/minikube-integration/20539-1115410/.minikube/machines/scheduled-stop-243584/id_rsa Username:docker}
	I0317 13:51:30.092577 1324214 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33941 SSHKeyPath:/home/jenkins/minikube-integration/20539-1115410/.minikube/machines/scheduled-stop-243584/id_rsa Username:docker}
	I0317 13:51:30.163583 1324214 ssh_runner.go:195] Run: systemctl --version
	I0317 13:51:30.313164 1324214 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0317 13:51:30.317879 1324214 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0317 13:51:30.347620 1324214 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0317 13:51:30.347696 1324214 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0317 13:51:30.381512 1324214 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0317 13:51:30.381530 1324214 start.go:495] detecting cgroup driver to use...
	I0317 13:51:30.381562 1324214 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0317 13:51:30.381660 1324214 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0317 13:51:30.398513 1324214 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0317 13:51:30.408654 1324214 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0317 13:51:30.418677 1324214 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0317 13:51:30.418738 1324214 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0317 13:51:30.429028 1324214 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0317 13:51:30.439223 1324214 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0317 13:51:30.449620 1324214 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0317 13:51:30.459668 1324214 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0317 13:51:30.469333 1324214 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0317 13:51:30.479954 1324214 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0317 13:51:30.490531 1324214 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0317 13:51:30.501148 1324214 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0317 13:51:30.510303 1324214 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0317 13:51:30.519365 1324214 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0317 13:51:30.613597 1324214 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0317 13:51:30.717924 1324214 start.go:495] detecting cgroup driver to use...
	I0317 13:51:30.717971 1324214 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0317 13:51:30.718017 1324214 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0317 13:51:30.738767 1324214 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0317 13:51:30.738826 1324214 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0317 13:51:30.751495 1324214 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0317 13:51:30.769602 1324214 ssh_runner.go:195] Run: which cri-dockerd
	I0317 13:51:30.778140 1324214 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0317 13:51:30.792169 1324214 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0317 13:51:30.821256 1324214 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0317 13:51:30.927974 1324214 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0317 13:51:31.030664 1324214 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0317 13:51:31.030776 1324214 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0317 13:51:31.050211 1324214 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0317 13:51:31.154094 1324214 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0317 13:51:31.445496 1324214 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0317 13:51:31.457399 1324214 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0317 13:51:31.469165 1324214 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0317 13:51:31.561952 1324214 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0317 13:51:31.645335 1324214 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0317 13:51:31.736617 1324214 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0317 13:51:31.750511 1324214 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0317 13:51:31.762062 1324214 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0317 13:51:31.852662 1324214 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0317 13:51:31.923255 1324214 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0317 13:51:31.923329 1324214 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0317 13:51:31.927189 1324214 start.go:563] Will wait 60s for crictl version
	I0317 13:51:31.927257 1324214 ssh_runner.go:195] Run: which crictl
	I0317 13:51:31.930792 1324214 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0317 13:51:31.971418 1324214 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.0.1
	RuntimeApiVersion:  v1
	I0317 13:51:31.971485 1324214 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0317 13:51:31.993482 1324214 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0317 13:51:32.020526 1324214 out.go:235] * Preparing Kubernetes v1.32.2 on Docker 28.0.1 ...
	I0317 13:51:32.020654 1324214 cli_runner.go:164] Run: docker network inspect scheduled-stop-243584 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0317 13:51:32.037621 1324214 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0317 13:51:32.041598 1324214 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0317 13:51:32.052925 1324214 kubeadm.go:883] updating cluster {Name:scheduled-stop-243584 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2048 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:scheduled-stop-243584 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0317 13:51:32.053022 1324214 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime docker
	I0317 13:51:32.053079 1324214 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0317 13:51:32.073679 1324214 docker.go:689] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.32.2
	registry.k8s.io/kube-scheduler:v1.32.2
	registry.k8s.io/kube-controller-manager:v1.32.2
	registry.k8s.io/kube-proxy:v1.32.2
	registry.k8s.io/etcd:3.5.16-0
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0317 13:51:32.073692 1324214 docker.go:619] Images already preloaded, skipping extraction
	I0317 13:51:32.073757 1324214 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0317 13:51:32.094443 1324214 docker.go:689] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.32.2
	registry.k8s.io/kube-controller-manager:v1.32.2
	registry.k8s.io/kube-scheduler:v1.32.2
	registry.k8s.io/kube-proxy:v1.32.2
	registry.k8s.io/etcd:3.5.16-0
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0317 13:51:32.094458 1324214 cache_images.go:84] Images are preloaded, skipping loading
	I0317 13:51:32.094466 1324214 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.32.2 docker true true} ...
	I0317 13:51:32.094564 1324214 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=scheduled-stop-243584 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:scheduled-stop-243584 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0317 13:51:32.094639 1324214 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0317 13:51:32.142792 1324214 cni.go:84] Creating CNI manager for ""
	I0317 13:51:32.142808 1324214 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0317 13:51:32.142817 1324214 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0317 13:51:32.142835 1324214 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:scheduled-stop-243584 NodeName:scheduled-stop-243584 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0317 13:51:32.142974 1324214 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "scheduled-stop-243584"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0317 13:51:32.143035 1324214 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0317 13:51:32.151672 1324214 binaries.go:44] Found k8s binaries, skipping transfer
	I0317 13:51:32.151736 1324214 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0317 13:51:32.160361 1324214 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (320 bytes)
	I0317 13:51:32.179410 1324214 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0317 13:51:32.196748 1324214 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2299 bytes)
	I0317 13:51:32.213979 1324214 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0317 13:51:32.217396 1324214 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0317 13:51:32.228110 1324214 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0317 13:51:32.316662 1324214 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0317 13:51:32.336322 1324214 certs.go:68] Setting up /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/scheduled-stop-243584 for IP: 192.168.76.2
	I0317 13:51:32.336344 1324214 certs.go:194] generating shared ca certs ...
	I0317 13:51:32.336367 1324214 certs.go:226] acquiring lock for ca certs: {Name:mka2aadc5dbaa2e5043414215576d5f76d3f10d4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 13:51:32.336523 1324214 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20539-1115410/.minikube/ca.key
	I0317 13:51:32.336564 1324214 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20539-1115410/.minikube/proxy-client-ca.key
	I0317 13:51:32.336570 1324214 certs.go:256] generating profile certs ...
	I0317 13:51:32.336632 1324214 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/scheduled-stop-243584/client.key
	I0317 13:51:32.336641 1324214 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/scheduled-stop-243584/client.crt with IP's: []
	I0317 13:51:32.849118 1324214 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/scheduled-stop-243584/client.crt ...
	I0317 13:51:32.849134 1324214 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/scheduled-stop-243584/client.crt: {Name:mkbf2ca1b10019250d7eeaddcc9c6c18f9e475c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 13:51:32.849341 1324214 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/scheduled-stop-243584/client.key ...
	I0317 13:51:32.849349 1324214 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/scheduled-stop-243584/client.key: {Name:mk2b3394b22eed346a9291a687219a1ae46bd410 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 13:51:32.849454 1324214 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/scheduled-stop-243584/apiserver.key.b467935d
	I0317 13:51:32.849467 1324214 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/scheduled-stop-243584/apiserver.crt.b467935d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I0317 13:51:33.457781 1324214 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/scheduled-stop-243584/apiserver.crt.b467935d ...
	I0317 13:51:33.457797 1324214 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/scheduled-stop-243584/apiserver.crt.b467935d: {Name:mk20c43b27a7877cc5918c8eba64d07380adb4c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 13:51:33.457988 1324214 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/scheduled-stop-243584/apiserver.key.b467935d ...
	I0317 13:51:33.457996 1324214 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/scheduled-stop-243584/apiserver.key.b467935d: {Name:mk7f366e33721b3533bf134e1069d21f787e7151 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 13:51:33.458087 1324214 certs.go:381] copying /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/scheduled-stop-243584/apiserver.crt.b467935d -> /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/scheduled-stop-243584/apiserver.crt
	I0317 13:51:33.458164 1324214 certs.go:385] copying /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/scheduled-stop-243584/apiserver.key.b467935d -> /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/scheduled-stop-243584/apiserver.key
	I0317 13:51:33.458218 1324214 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/scheduled-stop-243584/proxy-client.key
	I0317 13:51:33.458229 1324214 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/scheduled-stop-243584/proxy-client.crt with IP's: []
	I0317 13:51:34.497070 1324214 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/scheduled-stop-243584/proxy-client.crt ...
	I0317 13:51:34.497086 1324214 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/scheduled-stop-243584/proxy-client.crt: {Name:mk0827558a7eca62660222e8c745162089a25ff6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 13:51:34.497279 1324214 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/scheduled-stop-243584/proxy-client.key ...
	I0317 13:51:34.497288 1324214 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/scheduled-stop-243584/proxy-client.key: {Name:mk0c81c8e1b5740582737fbd02a324c4dcb9d86b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 13:51:34.497472 1324214 certs.go:484] found cert: /home/jenkins/minikube-integration/20539-1115410/.minikube/certs/1120731.pem (1338 bytes)
	W0317 13:51:34.497515 1324214 certs.go:480] ignoring /home/jenkins/minikube-integration/20539-1115410/.minikube/certs/1120731_empty.pem, impossibly tiny 0 bytes
	I0317 13:51:34.497522 1324214 certs.go:484] found cert: /home/jenkins/minikube-integration/20539-1115410/.minikube/certs/ca-key.pem (1675 bytes)
	I0317 13:51:34.497545 1324214 certs.go:484] found cert: /home/jenkins/minikube-integration/20539-1115410/.minikube/certs/ca.pem (1082 bytes)
	I0317 13:51:34.497565 1324214 certs.go:484] found cert: /home/jenkins/minikube-integration/20539-1115410/.minikube/certs/cert.pem (1123 bytes)
	I0317 13:51:34.497588 1324214 certs.go:484] found cert: /home/jenkins/minikube-integration/20539-1115410/.minikube/certs/key.pem (1675 bytes)
	I0317 13:51:34.497627 1324214 certs.go:484] found cert: /home/jenkins/minikube-integration/20539-1115410/.minikube/files/etc/ssl/certs/11207312.pem (1708 bytes)
	I0317 13:51:34.498194 1324214 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-1115410/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0317 13:51:34.528846 1324214 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-1115410/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0317 13:51:34.556332 1324214 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-1115410/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0317 13:51:34.582782 1324214 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-1115410/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0317 13:51:34.607100 1324214 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/scheduled-stop-243584/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0317 13:51:34.632616 1324214 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/scheduled-stop-243584/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0317 13:51:34.659222 1324214 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/scheduled-stop-243584/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0317 13:51:34.684892 1324214 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/scheduled-stop-243584/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0317 13:51:34.709453 1324214 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-1115410/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0317 13:51:34.736915 1324214 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-1115410/.minikube/certs/1120731.pem --> /usr/share/ca-certificates/1120731.pem (1338 bytes)
	I0317 13:51:34.762267 1324214 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-1115410/.minikube/files/etc/ssl/certs/11207312.pem --> /usr/share/ca-certificates/11207312.pem (1708 bytes)
	I0317 13:51:34.787339 1324214 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0317 13:51:34.805849 1324214 ssh_runner.go:195] Run: openssl version
	I0317 13:51:34.811538 1324214 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0317 13:51:34.821384 1324214 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0317 13:51:34.824790 1324214 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 17 13:15 /usr/share/ca-certificates/minikubeCA.pem
	I0317 13:51:34.824842 1324214 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0317 13:51:34.831715 1324214 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0317 13:51:34.841105 1324214 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1120731.pem && ln -fs /usr/share/ca-certificates/1120731.pem /etc/ssl/certs/1120731.pem"
	I0317 13:51:34.850674 1324214 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1120731.pem
	I0317 13:51:34.854278 1324214 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 17 13:22 /usr/share/ca-certificates/1120731.pem
	I0317 13:51:34.854332 1324214 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1120731.pem
	I0317 13:51:34.861470 1324214 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1120731.pem /etc/ssl/certs/51391683.0"
	I0317 13:51:34.871142 1324214 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11207312.pem && ln -fs /usr/share/ca-certificates/11207312.pem /etc/ssl/certs/11207312.pem"
	I0317 13:51:34.880691 1324214 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11207312.pem
	I0317 13:51:34.884052 1324214 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 17 13:22 /usr/share/ca-certificates/11207312.pem
	I0317 13:51:34.884106 1324214 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11207312.pem
	I0317 13:51:34.891159 1324214 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11207312.pem /etc/ssl/certs/3ec20f2e.0"
	I0317 13:51:34.900637 1324214 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0317 13:51:34.903934 1324214 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0317 13:51:34.903977 1324214 kubeadm.go:392] StartCluster: {Name:scheduled-stop-243584 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2048 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:scheduled-stop-243584 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0317 13:51:34.904091 1324214 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0317 13:51:34.922030 1324214 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0317 13:51:34.930991 1324214 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0317 13:51:34.939482 1324214 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0317 13:51:34.939533 1324214 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0317 13:51:34.948095 1324214 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0317 13:51:34.948105 1324214 kubeadm.go:157] found existing configuration files:
	
	I0317 13:51:34.948161 1324214 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0317 13:51:34.956645 1324214 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0317 13:51:34.956705 1324214 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0317 13:51:34.965156 1324214 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0317 13:51:34.974062 1324214 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0317 13:51:34.974135 1324214 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0317 13:51:34.982816 1324214 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0317 13:51:34.991898 1324214 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0317 13:51:34.991953 1324214 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0317 13:51:35.001929 1324214 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0317 13:51:35.014325 1324214 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0317 13:51:35.014399 1324214 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0317 13:51:35.023367 1324214 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0317 13:51:35.062377 1324214 kubeadm.go:310] [init] Using Kubernetes version: v1.32.2
	I0317 13:51:35.062459 1324214 kubeadm.go:310] [preflight] Running pre-flight checks
	I0317 13:51:35.084822 1324214 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0317 13:51:35.084887 1324214 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1077-aws
	I0317 13:51:35.084921 1324214 kubeadm.go:310] OS: Linux
	I0317 13:51:35.084965 1324214 kubeadm.go:310] CGROUPS_CPU: enabled
	I0317 13:51:35.085012 1324214 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0317 13:51:35.085073 1324214 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0317 13:51:35.085119 1324214 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0317 13:51:35.085165 1324214 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0317 13:51:35.085213 1324214 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0317 13:51:35.085256 1324214 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0317 13:51:35.085303 1324214 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0317 13:51:35.085348 1324214 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0317 13:51:35.145583 1324214 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0317 13:51:35.145697 1324214 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0317 13:51:35.145804 1324214 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0317 13:51:35.160380 1324214 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0317 13:51:35.167031 1324214 out.go:235]   - Generating certificates and keys ...
	I0317 13:51:35.167159 1324214 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0317 13:51:35.167239 1324214 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0317 13:51:35.554429 1324214 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0317 13:51:36.128450 1324214 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0317 13:51:36.314424 1324214 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0317 13:51:36.918377 1324214 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0317 13:51:37.695880 1324214 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0317 13:51:37.696247 1324214 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost scheduled-stop-243584] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0317 13:51:38.271257 1324214 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0317 13:51:38.271406 1324214 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost scheduled-stop-243584] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0317 13:51:39.036718 1324214 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0317 13:51:39.256822 1324214 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0317 13:51:39.678292 1324214 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0317 13:51:39.678516 1324214 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0317 13:51:40.392366 1324214 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0317 13:51:40.788525 1324214 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0317 13:51:41.033424 1324214 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0317 13:51:41.507347 1324214 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0317 13:51:41.925392 1324214 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0317 13:51:41.926134 1324214 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0317 13:51:41.929088 1324214 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0317 13:51:41.932757 1324214 out.go:235]   - Booting up control plane ...
	I0317 13:51:41.932873 1324214 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0317 13:51:41.932951 1324214 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0317 13:51:41.933020 1324214 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0317 13:51:41.958526 1324214 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0317 13:51:41.965886 1324214 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0317 13:51:41.965933 1324214 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0317 13:51:42.087287 1324214 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0317 13:51:42.087401 1324214 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0317 13:51:43.582129 1324214 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.501652099s
	I0317 13:51:43.582209 1324214 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0317 13:51:50.583775 1324214 kubeadm.go:310] [api-check] The API server is healthy after 7.001817229s
	I0317 13:51:50.606333 1324214 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0317 13:51:50.622604 1324214 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0317 13:51:50.655318 1324214 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0317 13:51:50.655513 1324214 kubeadm.go:310] [mark-control-plane] Marking the node scheduled-stop-243584 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0317 13:51:50.667246 1324214 kubeadm.go:310] [bootstrap-token] Using token: fmrs1x.hfr7ixeahmehc3u1
	I0317 13:51:50.670114 1324214 out.go:235]   - Configuring RBAC rules ...
	I0317 13:51:50.670262 1324214 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0317 13:51:50.674808 1324214 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0317 13:51:50.683043 1324214 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0317 13:51:50.689033 1324214 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0317 13:51:50.695900 1324214 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0317 13:51:50.699847 1324214 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0317 13:51:50.990798 1324214 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0317 13:51:51.424043 1324214 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0317 13:51:51.992695 1324214 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0317 13:51:51.994146 1324214 kubeadm.go:310] 
	I0317 13:51:51.994226 1324214 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0317 13:51:51.994230 1324214 kubeadm.go:310] 
	I0317 13:51:51.994305 1324214 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0317 13:51:51.994308 1324214 kubeadm.go:310] 
	I0317 13:51:51.994333 1324214 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0317 13:51:51.994390 1324214 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0317 13:51:51.994439 1324214 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0317 13:51:51.994442 1324214 kubeadm.go:310] 
	I0317 13:51:51.994498 1324214 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0317 13:51:51.994501 1324214 kubeadm.go:310] 
	I0317 13:51:51.994548 1324214 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0317 13:51:51.994553 1324214 kubeadm.go:310] 
	I0317 13:51:51.994603 1324214 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0317 13:51:51.994690 1324214 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0317 13:51:51.994757 1324214 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0317 13:51:51.994760 1324214 kubeadm.go:310] 
	I0317 13:51:51.994850 1324214 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0317 13:51:51.994925 1324214 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0317 13:51:51.994928 1324214 kubeadm.go:310] 
	I0317 13:51:51.995010 1324214 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token fmrs1x.hfr7ixeahmehc3u1 \
	I0317 13:51:51.995111 1324214 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:af4d80408e90de1964e97f63a0ed1bbfbf4eafdde4782d2526b5a4753a70e86a \
	I0317 13:51:51.995131 1324214 kubeadm.go:310] 	--control-plane 
	I0317 13:51:51.995134 1324214 kubeadm.go:310] 
	I0317 13:51:51.995217 1324214 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0317 13:51:51.995221 1324214 kubeadm.go:310] 
	I0317 13:51:51.995301 1324214 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token fmrs1x.hfr7ixeahmehc3u1 \
	I0317 13:51:51.995401 1324214 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:af4d80408e90de1964e97f63a0ed1bbfbf4eafdde4782d2526b5a4753a70e86a 
	I0317 13:51:52.002051 1324214 kubeadm.go:310] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I0317 13:51:52.002280 1324214 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1077-aws\n", err: exit status 1
	I0317 13:51:52.002488 1324214 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0317 13:51:52.002514 1324214 cni.go:84] Creating CNI manager for ""
	I0317 13:51:52.002528 1324214 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0317 13:51:52.005902 1324214 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0317 13:51:52.008706 1324214 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0317 13:51:52.026077 1324214 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0317 13:51:52.047041 1324214 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0317 13:51:52.047155 1324214 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 13:51:52.047237 1324214 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes scheduled-stop-243584 minikube.k8s.io/updated_at=2025_03_17T13_51_52_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=5a6f3d20e78a9ae03fc65e3f2e727d0ae0107b3c minikube.k8s.io/name=scheduled-stop-243584 minikube.k8s.io/primary=true
	I0317 13:51:52.205444 1324214 ops.go:34] apiserver oom_adj: -16
	I0317 13:51:52.205471 1324214 kubeadm.go:1113] duration metric: took 158.3583ms to wait for elevateKubeSystemPrivileges
	I0317 13:51:52.205490 1324214 kubeadm.go:394] duration metric: took 17.301517013s to StartCluster
	I0317 13:51:52.205506 1324214 settings.go:142] acquiring lock: {Name:mke49e242edc3285f205f5787b107a2dac6376eb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 13:51:52.205579 1324214 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20539-1115410/kubeconfig
	I0317 13:51:52.206210 1324214 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20539-1115410/kubeconfig: {Name:mkd7b3f1599a993f1ecc89c150f1c90959e7d444 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 13:51:52.206413 1324214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0317 13:51:52.206432 1324214 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0317 13:51:52.206667 1324214 config.go:182] Loaded profile config "scheduled-stop-243584": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0317 13:51:52.206708 1324214 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0317 13:51:52.206772 1324214 addons.go:69] Setting storage-provisioner=true in profile "scheduled-stop-243584"
	I0317 13:51:52.206783 1324214 addons.go:238] Setting addon storage-provisioner=true in "scheduled-stop-243584"
	I0317 13:51:52.206805 1324214 host.go:66] Checking if "scheduled-stop-243584" exists ...
	I0317 13:51:52.207265 1324214 cli_runner.go:164] Run: docker container inspect scheduled-stop-243584 --format={{.State.Status}}
	I0317 13:51:52.207898 1324214 addons.go:69] Setting default-storageclass=true in profile "scheduled-stop-243584"
	I0317 13:51:52.207912 1324214 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "scheduled-stop-243584"
	I0317 13:51:52.208187 1324214 cli_runner.go:164] Run: docker container inspect scheduled-stop-243584 --format={{.State.Status}}
	I0317 13:51:52.210482 1324214 out.go:177] * Verifying Kubernetes components...
	I0317 13:51:52.213528 1324214 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0317 13:51:52.251152 1324214 addons.go:238] Setting addon default-storageclass=true in "scheduled-stop-243584"
	I0317 13:51:52.251180 1324214 host.go:66] Checking if "scheduled-stop-243584" exists ...
	I0317 13:51:52.251600 1324214 cli_runner.go:164] Run: docker container inspect scheduled-stop-243584 --format={{.State.Status}}
	I0317 13:51:52.251754 1324214 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0317 13:51:52.254756 1324214 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0317 13:51:52.254767 1324214 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0317 13:51:52.254917 1324214 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-243584
	I0317 13:51:52.293671 1324214 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0317 13:51:52.293696 1324214 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0317 13:51:52.293768 1324214 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-243584
	I0317 13:51:52.299009 1324214 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33941 SSHKeyPath:/home/jenkins/minikube-integration/20539-1115410/.minikube/machines/scheduled-stop-243584/id_rsa Username:docker}
	I0317 13:51:52.336933 1324214 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33941 SSHKeyPath:/home/jenkins/minikube-integration/20539-1115410/.minikube/machines/scheduled-stop-243584/id_rsa Username:docker}
	I0317 13:51:52.537701 1324214 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0317 13:51:52.542705 1324214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0317 13:51:52.542846 1324214 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0317 13:51:52.599630 1324214 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0317 13:51:53.128443 1324214 api_server.go:52] waiting for apiserver process to appear ...
	I0317 13:51:53.128494 1324214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:51:53.128577 1324214 start.go:971] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I0317 13:51:53.155238 1324214 api_server.go:72] duration metric: took 948.781109ms to wait for apiserver process to appear ...
	I0317 13:51:53.155250 1324214 api_server.go:88] waiting for apiserver healthz status ...
	I0317 13:51:53.155300 1324214 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0317 13:51:53.168942 1324214 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0317 13:51:53.170158 1324214 api_server.go:141] control plane version: v1.32.2
	I0317 13:51:53.170174 1324214 api_server.go:131] duration metric: took 14.907174ms to wait for apiserver health ...
	I0317 13:51:53.170181 1324214 system_pods.go:43] waiting for kube-system pods to appear ...
	I0317 13:51:53.177719 1324214 system_pods.go:59] 5 kube-system pods found
	I0317 13:51:53.177778 1324214 system_pods.go:61] "etcd-scheduled-stop-243584" [7ee18146-7cb0-4e3b-8eb9-c42c4c205ea3] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0317 13:51:53.177786 1324214 system_pods.go:61] "kube-apiserver-scheduled-stop-243584" [03f29911-b26a-4d65-a5bf-6ae8a7888242] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0317 13:51:53.177796 1324214 system_pods.go:61] "kube-controller-manager-scheduled-stop-243584" [0ab2542e-9a6f-4666-b4e7-11df1dc4407c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0317 13:51:53.177810 1324214 system_pods.go:61] "kube-scheduler-scheduled-stop-243584" [be96d5fe-6afd-4d20-8fd3-28ca64f420d1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0317 13:51:53.177815 1324214 system_pods.go:61] "storage-provisioner" [4ad6664f-e005-48c7-a2ad-08c65444f76e] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0317 13:51:53.177819 1324214 system_pods.go:74] duration metric: took 7.634354ms to wait for pod list to return data ...
	I0317 13:51:53.177829 1324214 kubeadm.go:582] duration metric: took 971.3773ms to wait for: map[apiserver:true system_pods:true]
	I0317 13:51:53.177840 1324214 node_conditions.go:102] verifying NodePressure condition ...
	I0317 13:51:53.179515 1324214 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0317 13:51:53.180719 1324214 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0317 13:51:53.180737 1324214 node_conditions.go:123] node cpu capacity is 2
	I0317 13:51:53.180747 1324214 node_conditions.go:105] duration metric: took 2.90347ms to run NodePressure ...
	I0317 13:51:53.180758 1324214 start.go:241] waiting for startup goroutines ...
	I0317 13:51:53.182690 1324214 addons.go:514] duration metric: took 975.979019ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0317 13:51:53.632541 1324214 kapi.go:214] "coredns" deployment in "kube-system" namespace and "scheduled-stop-243584" context rescaled to 1 replicas
	I0317 13:51:53.632571 1324214 start.go:246] waiting for cluster config update ...
	I0317 13:51:53.632583 1324214 start.go:255] writing updated cluster config ...
	I0317 13:51:53.632874 1324214 ssh_runner.go:195] Run: rm -f paused
	I0317 13:51:53.703919 1324214 start.go:600] kubectl: 1.32.3, cluster: 1.32.2 (minor skew: 0)
	I0317 13:51:53.707274 1324214 out.go:177] * Done! kubectl is now configured to use "scheduled-stop-243584" cluster and "default" namespace by default
	
	
	==> Docker <==
	Mar 17 13:51:31 scheduled-stop-243584 dockerd[1348]: time="2025-03-17T13:51:31.240629990Z" level=info msg="Loading containers: start."
	Mar 17 13:51:31 scheduled-stop-243584 dockerd[1348]: time="2025-03-17T13:51:31.409818172Z" level=info msg="Loading containers: done."
	Mar 17 13:51:31 scheduled-stop-243584 dockerd[1348]: time="2025-03-17T13:51:31.419948555Z" level=info msg="Docker daemon" commit=bbd0a17 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1
	Mar 17 13:51:31 scheduled-stop-243584 dockerd[1348]: time="2025-03-17T13:51:31.420034216Z" level=info msg="Initializing buildkit"
	Mar 17 13:51:31 scheduled-stop-243584 dockerd[1348]: time="2025-03-17T13:51:31.434828333Z" level=info msg="Completed buildkit initialization"
	Mar 17 13:51:31 scheduled-stop-243584 dockerd[1348]: time="2025-03-17T13:51:31.443140739Z" level=info msg="Daemon has completed initialization"
	Mar 17 13:51:31 scheduled-stop-243584 dockerd[1348]: time="2025-03-17T13:51:31.443354711Z" level=info msg="API listen on /var/run/docker.sock"
	Mar 17 13:51:31 scheduled-stop-243584 dockerd[1348]: time="2025-03-17T13:51:31.443540499Z" level=info msg="API listen on [::]:2376"
	Mar 17 13:51:31 scheduled-stop-243584 systemd[1]: Started Docker Application Container Engine.
	Mar 17 13:51:31 scheduled-stop-243584 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	Mar 17 13:51:31 scheduled-stop-243584 cri-dockerd[1629]: time="2025-03-17T13:51:31Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Mar 17 13:51:31 scheduled-stop-243584 cri-dockerd[1629]: time="2025-03-17T13:51:31Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Mar 17 13:51:31 scheduled-stop-243584 cri-dockerd[1629]: time="2025-03-17T13:51:31Z" level=info msg="Start docker client with request timeout 0s"
	Mar 17 13:51:31 scheduled-stop-243584 cri-dockerd[1629]: time="2025-03-17T13:51:31Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Mar 17 13:51:31 scheduled-stop-243584 cri-dockerd[1629]: time="2025-03-17T13:51:31Z" level=info msg="Loaded network plugin cni"
	Mar 17 13:51:31 scheduled-stop-243584 cri-dockerd[1629]: time="2025-03-17T13:51:31Z" level=info msg="Docker cri networking managed by network plugin cni"
	Mar 17 13:51:31 scheduled-stop-243584 cri-dockerd[1629]: time="2025-03-17T13:51:31Z" level=info msg="Setting cgroupDriver cgroupfs"
	Mar 17 13:51:31 scheduled-stop-243584 cri-dockerd[1629]: time="2025-03-17T13:51:31Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Mar 17 13:51:31 scheduled-stop-243584 cri-dockerd[1629]: time="2025-03-17T13:51:31Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Mar 17 13:51:31 scheduled-stop-243584 cri-dockerd[1629]: time="2025-03-17T13:51:31Z" level=info msg="Start cri-dockerd grpc backend"
	Mar 17 13:51:31 scheduled-stop-243584 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	Mar 17 13:51:43 scheduled-stop-243584 cri-dockerd[1629]: time="2025-03-17T13:51:43Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/9fbeab80d02e830c7ddc50c52bdd2f90d7921efa2197f0911974db02a6c8dee0/resolv.conf as [nameserver 192.168.76.1 search us-east-2.compute.internal options edns0 trust-ad ndots:0]"
	Mar 17 13:51:44 scheduled-stop-243584 cri-dockerd[1629]: time="2025-03-17T13:51:44Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/07a0a7263d87c5344a1c9157ef02dafb0bebfc4cd942663552904532e851f9bd/resolv.conf as [nameserver 192.168.76.1 search us-east-2.compute.internal options edns0 trust-ad ndots:0]"
	Mar 17 13:51:44 scheduled-stop-243584 cri-dockerd[1629]: time="2025-03-17T13:51:44Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/0bea48ed5d84b8c53a7df3fed3a489ab052f44df7c7626fce1a18ff5869b9a6c/resolv.conf as [nameserver 192.168.76.1 search us-east-2.compute.internal options edns0 trust-ad ndots:0]"
	Mar 17 13:51:44 scheduled-stop-243584 cri-dockerd[1629]: time="2025-03-17T13:51:44Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/37a821cf2cac29e435c7c4e9d292af7e2101ea42103fe66d3a2af681a79116d2/resolv.conf as [nameserver 192.168.76.1 search us-east-2.compute.internal options edns0 trust-ad ndots:0]"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	81d46e6721573       7fc9d4aa817aa       11 seconds ago      Running             etcd                      0                   37a821cf2cac2       etcd-scheduled-stop-243584
	3ad036ce2cfb7       82dfa03f692fb       11 seconds ago      Running             kube-scheduler            0                   0bea48ed5d84b       kube-scheduler-scheduled-stop-243584
	a2e69c3351900       3c9285acfd2ff       11 seconds ago      Running             kube-controller-manager   0                   07a0a7263d87c       kube-controller-manager-scheduled-stop-243584
	a926595c50419       6417e1437b6d9       12 seconds ago      Running             kube-apiserver            0                   9fbeab80d02e8       kube-apiserver-scheduled-stop-243584
	
	
	==> describe nodes <==
	Name:               scheduled-stop-243584
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=scheduled-stop-243584
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5a6f3d20e78a9ae03fc65e3f2e727d0ae0107b3c
	                    minikube.k8s.io/name=scheduled-stop-243584
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_03_17T13_51_52_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 17 Mar 2025 13:51:48 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  scheduled-stop-243584
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 17 Mar 2025 13:51:51 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 17 Mar 2025 13:51:48 +0000   Mon, 17 Mar 2025 13:51:44 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 17 Mar 2025 13:51:48 +0000   Mon, 17 Mar 2025 13:51:44 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 17 Mar 2025 13:51:48 +0000   Mon, 17 Mar 2025 13:51:44 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 17 Mar 2025 13:51:48 +0000   Mon, 17 Mar 2025 13:51:48 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    scheduled-stop-243584
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 84e8daba3db2458990b6f8d5f3e476ab
	  System UUID:                f148d108-2187-4838-8f29-d6fb2835fd93
	  Boot ID:                    181457f8-a248-4acf-a09f-ef4fd7d5bbae
	  Kernel Version:             5.15.0-1077-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://28.0.1
	  Kubelet Version:            v1.32.2
	  Kube-Proxy Version:         v1.32.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (5 in total)
	  Namespace                   Name                                             CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                             ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-scheduled-stop-243584                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         6s
	  kube-system                 kube-apiserver-scheduled-stop-243584             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4s
	  kube-system                 kube-controller-manager-scheduled-stop-243584    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5s
	  kube-system                 kube-scheduler-scheduled-stop-243584             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4s
	  kube-system                 storage-provisioner                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                650m (32%)  0 (0%)
	  memory             100Mi (1%)  0 (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 4s    kubelet          Starting kubelet.
	  Warning  CgroupV1                 4s    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeAllocatableEnforced  4s    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  4s    kubelet          Node scheduled-stop-243584 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    4s    kubelet          Node scheduled-stop-243584 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     4s    kubelet          Node scheduled-stop-243584 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           0s    node-controller  Node scheduled-stop-243584 event: Registered Node scheduled-stop-243584 in Controller
	
	
	==> dmesg <==
	
	
	==> etcd [81d46e672157] <==
	{"level":"info","ts":"2025-03-17T13:51:44.714837Z","caller":"embed/etcd.go:871","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-03-17T13:51:44.715128Z","caller":"embed/etcd.go:600","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-03-17T13:51:44.715289Z","caller":"embed/etcd.go:572","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-03-17T13:51:44.699604Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2025-03-17T13:51:44.718372Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2025-03-17T13:51:45.661457Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 1"}
	{"level":"info","ts":"2025-03-17T13:51:45.661513Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 1"}
	{"level":"info","ts":"2025-03-17T13:51:45.661531Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 1"}
	{"level":"info","ts":"2025-03-17T13:51:45.661548Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 2"}
	{"level":"info","ts":"2025-03-17T13:51:45.661556Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-03-17T13:51:45.661567Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 2"}
	{"level":"info","ts":"2025-03-17T13:51:45.661575Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-03-17T13:51:45.665422Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:scheduled-stop-243584 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-03-17T13:51:45.665605Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-03-17T13:51:45.665756Z","caller":"etcdserver/server.go:2651","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-03-17T13:51:45.665605Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-03-17T13:51:45.668663Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-03-17T13:51:45.674899Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-03-17T13:51:45.675110Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-03-17T13:51:45.675282Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2025-03-17T13:51:45.675460Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-03-17T13:51:45.675613Z","caller":"etcdserver/server.go:2675","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-03-17T13:51:45.675925Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-03-17T13:51:45.676478Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-03-17T13:51:45.687750Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	
	
	==> kernel <==
	 13:51:55 up  9:34,  0 users,  load average: 3.54, 2.49, 2.61
	Linux scheduled-stop-243584 5.15.0-1077-aws #84~20.04.1-Ubuntu SMP Mon Jan 20 22:14:27 UTC 2025 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kube-apiserver [a926595c5041] <==
	I0317 13:51:48.552741       1 aggregator.go:171] initial CRD sync complete...
	I0317 13:51:48.552835       1 autoregister_controller.go:144] Starting autoregister controller
	I0317 13:51:48.552942       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0317 13:51:48.553080       1 cache.go:39] Caches are synced for autoregister controller
	I0317 13:51:48.588016       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0317 13:51:48.590943       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0317 13:51:48.590989       1 shared_informer.go:320] Caches are synced for configmaps
	I0317 13:51:48.591179       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0317 13:51:48.591413       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0317 13:51:48.591690       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0317 13:51:48.591752       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0317 13:51:48.591906       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I0317 13:51:49.388409       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0317 13:51:49.396060       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0317 13:51:49.396088       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0317 13:51:50.088479       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0317 13:51:50.146905       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0317 13:51:50.211425       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0317 13:51:50.219335       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I0317 13:51:50.220784       1 controller.go:615] quota admission added evaluator for: endpoints
	I0317 13:51:50.225603       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0317 13:51:50.517342       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0317 13:51:51.402324       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0317 13:51:51.422603       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0317 13:51:51.432923       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	
	
	==> kube-controller-manager [a2e69c335190] <==
	I0317 13:51:55.085953       1 shared_informer.go:320] Caches are synced for resource quota
	I0317 13:51:55.091032       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0317 13:51:55.101976       1 shared_informer.go:320] Caches are synced for garbage collector
	I0317 13:51:55.114774       1 shared_informer.go:320] Caches are synced for disruption
	I0317 13:51:55.114983       1 shared_informer.go:320] Caches are synced for HPA
	I0317 13:51:55.114918       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0317 13:51:55.114933       1 shared_informer.go:320] Caches are synced for taint
	I0317 13:51:55.115615       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0317 13:51:55.115810       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="scheduled-stop-243584"
	I0317 13:51:55.116437       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0317 13:51:55.115995       1 shared_informer.go:320] Caches are synced for expand
	I0317 13:51:55.116692       1 shared_informer.go:320] Caches are synced for persistent volume
	I0317 13:51:55.116088       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0317 13:51:55.122893       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0317 13:51:55.116208       1 shared_informer.go:320] Caches are synced for service account
	I0317 13:51:55.117070       1 shared_informer.go:320] Caches are synced for endpoint
	I0317 13:51:55.117083       1 shared_informer.go:320] Caches are synced for daemon sets
	I0317 13:51:55.126148       1 shared_informer.go:320] Caches are synced for cronjob
	I0317 13:51:55.126239       1 shared_informer.go:320] Caches are synced for job
	I0317 13:51:55.126281       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0317 13:51:55.126319       1 shared_informer.go:320] Caches are synced for crt configmap
	I0317 13:51:55.134566       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="scheduled-stop-243584" podCIDRs=["10.244.0.0/24"]
	I0317 13:51:55.134674       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="scheduled-stop-243584"
	I0317 13:51:55.134748       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="scheduled-stop-243584"
	I0317 13:51:55.149419       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="scheduled-stop-243584"
	
	
	==> kube-scheduler [3ad036ce2cfb] <==
	W0317 13:51:48.559503       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0317 13:51:48.561151       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0317 13:51:48.559554       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0317 13:51:48.561322       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0317 13:51:48.559614       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0317 13:51:48.561495       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0317 13:51:48.559685       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0317 13:51:48.561855       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0317 13:51:48.559721       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0317 13:51:48.562162       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0317 13:51:49.454386       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0317 13:51:49.454439       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0317 13:51:49.491751       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0317 13:51:49.491831       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0317 13:51:49.532936       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0317 13:51:49.533204       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0317 13:51:49.621174       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0317 13:51:49.621221       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0317 13:51:49.718322       1 reflector.go:569] runtime/asm_arm64.s:1223: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0317 13:51:49.718589       1 reflector.go:166] "Unhandled Error" err="runtime/asm_arm64.s:1223: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0317 13:51:49.749468       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0317 13:51:49.749516       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0317 13:51:49.754023       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0317 13:51:49.754070       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I0317 13:51:52.349519       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Mar 17 13:51:51 scheduled-stop-243584 kubelet[2457]: I0317 13:51:51.721812    2457 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f99fcd21e2352b3194e2f689a2b0ab77-usr-share-ca-certificates\") pod \"kube-apiserver-scheduled-stop-243584\" (UID: \"f99fcd21e2352b3194e2f689a2b0ab77\") " pod="kube-system/kube-apiserver-scheduled-stop-243584"
	Mar 17 13:51:51 scheduled-stop-243584 kubelet[2457]: I0317 13:51:51.721836    2457 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/674948943cebe292401ee019a1dfbf65-ca-certs\") pod \"kube-controller-manager-scheduled-stop-243584\" (UID: \"674948943cebe292401ee019a1dfbf65\") " pod="kube-system/kube-controller-manager-scheduled-stop-243584"
	Mar 17 13:51:51 scheduled-stop-243584 kubelet[2457]: I0317 13:51:51.721855    2457 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/674948943cebe292401ee019a1dfbf65-usr-share-ca-certificates\") pod \"kube-controller-manager-scheduled-stop-243584\" (UID: \"674948943cebe292401ee019a1dfbf65\") " pod="kube-system/kube-controller-manager-scheduled-stop-243584"
	Mar 17 13:51:51 scheduled-stop-243584 kubelet[2457]: I0317 13:51:51.721873    2457 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/ec2d1be6b05e32dd0ed64c110d469297-etcd-certs\") pod \"etcd-scheduled-stop-243584\" (UID: \"ec2d1be6b05e32dd0ed64c110d469297\") " pod="kube-system/etcd-scheduled-stop-243584"
	Mar 17 13:51:51 scheduled-stop-243584 kubelet[2457]: I0317 13:51:51.721892    2457 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f99fcd21e2352b3194e2f689a2b0ab77-ca-certs\") pod \"kube-apiserver-scheduled-stop-243584\" (UID: \"f99fcd21e2352b3194e2f689a2b0ab77\") " pod="kube-system/kube-apiserver-scheduled-stop-243584"
	Mar 17 13:51:51 scheduled-stop-243584 kubelet[2457]: I0317 13:51:51.721910    2457 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f99fcd21e2352b3194e2f689a2b0ab77-usr-local-share-ca-certificates\") pod \"kube-apiserver-scheduled-stop-243584\" (UID: \"f99fcd21e2352b3194e2f689a2b0ab77\") " pod="kube-system/kube-apiserver-scheduled-stop-243584"
	Mar 17 13:51:51 scheduled-stop-243584 kubelet[2457]: I0317 13:51:51.721931    2457 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f99fcd21e2352b3194e2f689a2b0ab77-k8s-certs\") pod \"kube-apiserver-scheduled-stop-243584\" (UID: \"f99fcd21e2352b3194e2f689a2b0ab77\") " pod="kube-system/kube-apiserver-scheduled-stop-243584"
	Mar 17 13:51:51 scheduled-stop-243584 kubelet[2457]: I0317 13:51:51.721956    2457 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/674948943cebe292401ee019a1dfbf65-kubeconfig\") pod \"kube-controller-manager-scheduled-stop-243584\" (UID: \"674948943cebe292401ee019a1dfbf65\") " pod="kube-system/kube-controller-manager-scheduled-stop-243584"
	Mar 17 13:51:52 scheduled-stop-243584 kubelet[2457]: I0317 13:51:52.319355    2457 apiserver.go:52] "Watching apiserver"
	Mar 17 13:51:52 scheduled-stop-243584 kubelet[2457]: I0317 13:51:52.403045    2457 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/etcd-scheduled-stop-243584"
	Mar 17 13:51:52 scheduled-stop-243584 kubelet[2457]: I0317 13:51:52.406326    2457 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-scheduled-stop-243584"
	Mar 17 13:51:52 scheduled-stop-243584 kubelet[2457]: I0317 13:51:52.406602    2457 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-scheduled-stop-243584"
	Mar 17 13:51:52 scheduled-stop-243584 kubelet[2457]: E0317 13:51:52.417971    2457 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"etcd-scheduled-stop-243584\" already exists" pod="kube-system/etcd-scheduled-stop-243584"
	Mar 17 13:51:52 scheduled-stop-243584 kubelet[2457]: I0317 13:51:52.420716    2457 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Mar 17 13:51:52 scheduled-stop-243584 kubelet[2457]: E0317 13:51:52.421070    2457 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-scheduled-stop-243584\" already exists" pod="kube-system/kube-scheduler-scheduled-stop-243584"
	Mar 17 13:51:52 scheduled-stop-243584 kubelet[2457]: E0317 13:51:52.421908    2457 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-scheduled-stop-243584\" already exists" pod="kube-system/kube-apiserver-scheduled-stop-243584"
	Mar 17 13:51:52 scheduled-stop-243584 kubelet[2457]: I0317 13:51:52.457104    2457 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-scheduled-stop-243584" podStartSLOduration=3.457083252 podStartE2EDuration="3.457083252s" podCreationTimestamp="2025-03-17 13:51:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 13:51:52.443434714 +0000 UTC m=+1.253430860" watchObservedRunningTime="2025-03-17 13:51:52.457083252 +0000 UTC m=+1.267079390"
	Mar 17 13:51:52 scheduled-stop-243584 kubelet[2457]: I0317 13:51:52.457415    2457 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-scheduled-stop-243584" podStartSLOduration=1.457408075 podStartE2EDuration="1.457408075s" podCreationTimestamp="2025-03-17 13:51:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 13:51:52.454801572 +0000 UTC m=+1.264797710" watchObservedRunningTime="2025-03-17 13:51:52.457408075 +0000 UTC m=+1.267404213"
	Mar 17 13:51:52 scheduled-stop-243584 kubelet[2457]: I0317 13:51:52.481835    2457 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-scheduled-stop-243584" podStartSLOduration=2.481814604 podStartE2EDuration="2.481814604s" podCreationTimestamp="2025-03-17 13:51:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 13:51:52.469147395 +0000 UTC m=+1.279143541" watchObservedRunningTime="2025-03-17 13:51:52.481814604 +0000 UTC m=+1.291810742"
	Mar 17 13:51:52 scheduled-stop-243584 kubelet[2457]: I0317 13:51:52.494948    2457 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-scheduled-stop-243584" podStartSLOduration=1.4949299759999999 podStartE2EDuration="1.494929976s" podCreationTimestamp="2025-03-17 13:51:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 13:51:52.48239125 +0000 UTC m=+1.292387388" watchObservedRunningTime="2025-03-17 13:51:52.494929976 +0000 UTC m=+1.304926122"
	Mar 17 13:51:55 scheduled-stop-243584 kubelet[2457]: I0317 13:51:55.258939    2457 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/4ad6664f-e005-48c7-a2ad-08c65444f76e-tmp\") pod \"storage-provisioner\" (UID: \"4ad6664f-e005-48c7-a2ad-08c65444f76e\") " pod="kube-system/storage-provisioner"
	Mar 17 13:51:55 scheduled-stop-243584 kubelet[2457]: I0317 13:51:55.259003    2457 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9xrk\" (UniqueName: \"kubernetes.io/projected/4ad6664f-e005-48c7-a2ad-08c65444f76e-kube-api-access-v9xrk\") pod \"storage-provisioner\" (UID: \"4ad6664f-e005-48c7-a2ad-08c65444f76e\") " pod="kube-system/storage-provisioner"
	Mar 17 13:51:55 scheduled-stop-243584 kubelet[2457]: E0317 13:51:55.371399    2457 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Mar 17 13:51:55 scheduled-stop-243584 kubelet[2457]: E0317 13:51:55.371439    2457 projected.go:194] Error preparing data for projected volume kube-api-access-v9xrk for pod kube-system/storage-provisioner: configmap "kube-root-ca.crt" not found
	Mar 17 13:51:55 scheduled-stop-243584 kubelet[2457]: E0317 13:51:55.371514    2457 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4ad6664f-e005-48c7-a2ad-08c65444f76e-kube-api-access-v9xrk podName:4ad6664f-e005-48c7-a2ad-08c65444f76e nodeName:}" failed. No retries permitted until 2025-03-17 13:51:55.871490223 +0000 UTC m=+4.681486361 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-v9xrk" (UniqueName: "kubernetes.io/projected/4ad6664f-e005-48c7-a2ad-08c65444f76e-kube-api-access-v9xrk") pod "storage-provisioner" (UID: "4ad6664f-e005-48c7-a2ad-08c65444f76e") : configmap "kube-root-ca.crt" not found
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p scheduled-stop-243584 -n scheduled-stop-243584
helpers_test.go:261: (dbg) Run:  kubectl --context scheduled-stop-243584 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: coredns-668d6bf9bc-r49zz kube-proxy-94gxh storage-provisioner
helpers_test.go:274: ======> post-mortem[TestScheduledStopUnix]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context scheduled-stop-243584 describe pod coredns-668d6bf9bc-r49zz kube-proxy-94gxh storage-provisioner
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context scheduled-stop-243584 describe pod coredns-668d6bf9bc-r49zz kube-proxy-94gxh storage-provisioner: exit status 1 (92.594352ms)

** stderr ** 
	Error from server (NotFound): pods "coredns-668d6bf9bc-r49zz" not found
	Error from server (NotFound): pods "kube-proxy-94gxh" not found
	Error from server (NotFound): pods "storage-provisioner" not found

** /stderr **
helpers_test.go:279: kubectl --context scheduled-stop-243584 describe pod coredns-668d6bf9bc-r49zz kube-proxy-94gxh storage-provisioner: exit status 1
helpers_test.go:175: Cleaning up "scheduled-stop-243584" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-243584
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-243584: (2.127815287s)
--- FAIL: TestScheduledStopUnix (36.97s)


Test pass (319/346)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 6.16
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.1
9 TestDownloadOnly/v1.20.0/DeleteAll 0.23
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.15
12 TestDownloadOnly/v1.32.2/json-events 5.22
13 TestDownloadOnly/v1.32.2/preload-exists 0
17 TestDownloadOnly/v1.32.2/LogsDuration 0.1
18 TestDownloadOnly/v1.32.2/DeleteAll 0.21
19 TestDownloadOnly/v1.32.2/DeleteAlwaysSucceeds 0.14
21 TestBinaryMirror 0.6
22 TestOffline 91.01
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.08
27 TestAddons/Setup 219.78
29 TestAddons/serial/Volcano 41.11
31 TestAddons/serial/GCPAuth/Namespaces 0.19
32 TestAddons/serial/GCPAuth/FakeCredentials 10.02
35 TestAddons/parallel/Registry 15.82
36 TestAddons/parallel/Ingress 20.18
37 TestAddons/parallel/InspektorGadget 11.73
38 TestAddons/parallel/MetricsServer 6.86
40 TestAddons/parallel/CSI 55.5
41 TestAddons/parallel/Headlamp 17.62
42 TestAddons/parallel/CloudSpanner 5.63
43 TestAddons/parallel/LocalPath 8.79
44 TestAddons/parallel/NvidiaDevicePlugin 5.99
45 TestAddons/parallel/Yakd 10.8
47 TestAddons/StoppedEnableDisable 11.24
48 TestCertOptions 49.38
49 TestCertExpiration 249.41
50 TestDockerFlags 46.45
51 TestForceSystemdFlag 42.11
52 TestForceSystemdEnv 42.98
58 TestErrorSpam/setup 31.9
59 TestErrorSpam/start 0.8
60 TestErrorSpam/status 1.05
61 TestErrorSpam/pause 1.4
62 TestErrorSpam/unpause 1.7
63 TestErrorSpam/stop 11.15
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 72.72
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 34.91
70 TestFunctional/serial/KubeContext 0.07
71 TestFunctional/serial/KubectlGetPods 0.11
74 TestFunctional/serial/CacheCmd/cache/add_remote 3.24
75 TestFunctional/serial/CacheCmd/cache/add_local 1.03
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
77 TestFunctional/serial/CacheCmd/cache/list 0.06
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.31
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.68
80 TestFunctional/serial/CacheCmd/cache/delete 0.12
81 TestFunctional/serial/MinikubeKubectlCmd 0.14
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.15
83 TestFunctional/serial/ExtraConfig 41.01
84 TestFunctional/serial/ComponentHealth 0.09
85 TestFunctional/serial/LogsCmd 1.28
86 TestFunctional/serial/LogsFileCmd 1.25
87 TestFunctional/serial/InvalidService 4.61
89 TestFunctional/parallel/ConfigCmd 0.5
90 TestFunctional/parallel/DashboardCmd 11.51
91 TestFunctional/parallel/DryRun 0.49
92 TestFunctional/parallel/InternationalLanguage 0.23
93 TestFunctional/parallel/StatusCmd 1.33
97 TestFunctional/parallel/ServiceCmdConnect 10.72
98 TestFunctional/parallel/AddonsCmd 0.2
99 TestFunctional/parallel/PersistentVolumeClaim 28.18
101 TestFunctional/parallel/SSHCmd 0.75
102 TestFunctional/parallel/CpCmd 2.27
104 TestFunctional/parallel/FileSync 0.35
105 TestFunctional/parallel/CertSync 2.25
109 TestFunctional/parallel/NodeLabels 0.12
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.38
113 TestFunctional/parallel/License 0.28
115 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.61
116 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
118 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 10.46
119 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.11
120 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
124 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.12
125 TestFunctional/parallel/ServiceCmd/DeployApp 7.21
126 TestFunctional/parallel/ProfileCmd/profile_not_create 0.46
127 TestFunctional/parallel/ServiceCmd/List 0.58
128 TestFunctional/parallel/ProfileCmd/profile_list 0.49
129 TestFunctional/parallel/ServiceCmd/JSONOutput 0.58
130 TestFunctional/parallel/ProfileCmd/profile_json_output 0.48
131 TestFunctional/parallel/ServiceCmd/HTTPS 0.55
132 TestFunctional/parallel/MountCmd/any-port 8.52
133 TestFunctional/parallel/ServiceCmd/Format 0.39
134 TestFunctional/parallel/ServiceCmd/URL 0.48
135 TestFunctional/parallel/MountCmd/specific-port 2.04
136 TestFunctional/parallel/MountCmd/VerifyCleanup 2.56
137 TestFunctional/parallel/Version/short 0.08
138 TestFunctional/parallel/Version/components 1.25
139 TestFunctional/parallel/ImageCommands/ImageListShort 0.27
140 TestFunctional/parallel/ImageCommands/ImageListTable 0.24
141 TestFunctional/parallel/ImageCommands/ImageListJson 0.25
142 TestFunctional/parallel/ImageCommands/ImageListYaml 0.25
143 TestFunctional/parallel/ImageCommands/ImageBuild 3.46
144 TestFunctional/parallel/ImageCommands/Setup 0.74
145 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.16
146 TestFunctional/parallel/DockerEnv/bash 1.25
147 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.98
148 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.28
149 TestFunctional/parallel/UpdateContextCmd/no_changes 0.21
150 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.22
151 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.23
152 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.47
153 TestFunctional/parallel/ImageCommands/ImageRemove 0.52
154 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.81
155 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.41
156 TestFunctional/delete_echo-server_images 0.04
157 TestFunctional/delete_my-image_image 0.02
158 TestFunctional/delete_minikube_cached_images 0.02
163 TestMultiControlPlane/serial/StartCluster 123.57
164 TestMultiControlPlane/serial/DeployApp 42.77
165 TestMultiControlPlane/serial/PingHostFromPods 1.71
166 TestMultiControlPlane/serial/AddWorkerNode 26.04
167 TestMultiControlPlane/serial/NodeLabels 0.12
168 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.05
169 TestMultiControlPlane/serial/CopyFile 19.15
170 TestMultiControlPlane/serial/StopSecondaryNode 11.76
171 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.77
172 TestMultiControlPlane/serial/RestartSecondaryNode 40.41
173 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.19
174 TestMultiControlPlane/serial/RestartClusterKeepsNodes 183.17
175 TestMultiControlPlane/serial/DeleteSecondaryNode 11.2
176 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.78
177 TestMultiControlPlane/serial/StopCluster 32.9
178 TestMultiControlPlane/serial/RestartCluster 88.13
179 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.77
180 TestMultiControlPlane/serial/AddSecondaryNode 46.41
181 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.05
184 TestImageBuild/serial/Setup 33.07
185 TestImageBuild/serial/NormalBuild 1.77
186 TestImageBuild/serial/BuildWithBuildArg 1.03
187 TestImageBuild/serial/BuildWithDockerIgnore 0.9
188 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.77
192 TestJSONOutput/start/Command 76.34
193 TestJSONOutput/start/Audit 0
195 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
196 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
198 TestJSONOutput/pause/Command 0.57
199 TestJSONOutput/pause/Audit 0
201 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
202 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
204 TestJSONOutput/unpause/Command 0.51
205 TestJSONOutput/unpause/Audit 0
207 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
208 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
210 TestJSONOutput/stop/Command 10.96
211 TestJSONOutput/stop/Audit 0
213 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
214 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
215 TestErrorJSONOutput 0.24
217 TestKicCustomNetwork/create_custom_network 33.79
218 TestKicCustomNetwork/use_default_bridge_network 30.23
219 TestKicExistingNetwork 32.75
220 TestKicCustomSubnet 33.54
221 TestKicStaticIP 34.09
222 TestMainNoArgs 0.05
223 TestMinikubeProfile 71.63
226 TestMountStart/serial/StartWithMountFirst 7.89
227 TestMountStart/serial/VerifyMountFirst 0.26
228 TestMountStart/serial/StartWithMountSecond 10.96
229 TestMountStart/serial/VerifyMountSecond 0.27
230 TestMountStart/serial/DeleteFirst 1.48
231 TestMountStart/serial/VerifyMountPostDelete 0.26
232 TestMountStart/serial/Stop 1.2
233 TestMountStart/serial/RestartStopped 8.64
234 TestMountStart/serial/VerifyMountPostStop 0.26
237 TestMultiNode/serial/FreshStart2Nodes 74.62
238 TestMultiNode/serial/DeployApp2Nodes 48.48
239 TestMultiNode/serial/PingHostFrom2Pods 1.05
240 TestMultiNode/serial/AddNode 18.65
241 TestMultiNode/serial/MultiNodeLabels 0.1
242 TestMultiNode/serial/ProfileList 0.69
243 TestMultiNode/serial/CopyFile 10.12
244 TestMultiNode/serial/StopNode 2.28
245 TestMultiNode/serial/StartAfterStop 10.7
246 TestMultiNode/serial/RestartKeepsNodes 82.61
247 TestMultiNode/serial/DeleteNode 5.35
248 TestMultiNode/serial/StopMultiNode 21.52
249 TestMultiNode/serial/RestartMultiNode 63.74
250 TestMultiNode/serial/ValidateNameConflict 36.42
255 TestPreload 105.5
258 TestSkaffold 118.45
260 TestInsufficientStorage 10.76
261 TestRunningBinaryUpgrade 92.6
263 TestKubernetesUpgrade 383.37
264 TestMissingContainerUpgrade 164.55
266 TestPause/serial/Start 51.65
267 TestPause/serial/SecondStartNoReconfiguration 36.13
268 TestPause/serial/Pause 0.67
269 TestPause/serial/VerifyStatus 0.45
270 TestPause/serial/Unpause 0.65
271 TestPause/serial/PauseAgain 0.93
272 TestPause/serial/DeletePaused 2.36
273 TestPause/serial/VerifyDeletedResources 0.17
274 TestStoppedBinaryUpgrade/Setup 0.89
275 TestStoppedBinaryUpgrade/Upgrade 83.68
276 TestStoppedBinaryUpgrade/MinikubeLogs 1.34
285 TestNoKubernetes/serial/StartNoK8sWithVersion 0.14
286 TestNoKubernetes/serial/StartWithK8s 39.75
298 TestNoKubernetes/serial/StartWithStopK8s 21.02
299 TestNoKubernetes/serial/Start 10.27
300 TestNoKubernetes/serial/VerifyK8sNotRunning 0.34
301 TestNoKubernetes/serial/ProfileList 1.18
302 TestNoKubernetes/serial/Stop 1.26
303 TestNoKubernetes/serial/StartNoArgs 8.81
304 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.43
306 TestStartStop/group/old-k8s-version/serial/FirstStart 153.01
307 TestStartStop/group/old-k8s-version/serial/DeployApp 10.55
308 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.08
309 TestStartStop/group/old-k8s-version/serial/Stop 11.13
310 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.19
311 TestStartStop/group/old-k8s-version/serial/SecondStart 122.72
313 TestStartStop/group/no-preload/serial/FirstStart 60.81
314 TestStartStop/group/no-preload/serial/DeployApp 9.39
315 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.16
316 TestStartStop/group/no-preload/serial/Stop 10.83
317 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.19
318 TestStartStop/group/no-preload/serial/SecondStart 266.55
319 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
320 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.14
321 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.31
322 TestStartStop/group/old-k8s-version/serial/Pause 4.28
324 TestStartStop/group/embed-certs/serial/FirstStart 73.26
325 TestStartStop/group/embed-certs/serial/DeployApp 9.35
326 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.07
327 TestStartStop/group/embed-certs/serial/Stop 10.95
328 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.24
329 TestStartStop/group/embed-certs/serial/SecondStart 274.03
330 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6
331 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.1
332 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.22
333 TestStartStop/group/no-preload/serial/Pause 2.87
335 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 72.08
336 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.39
337 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.15
338 TestStartStop/group/default-k8s-diff-port/serial/Stop 11.01
339 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.22
340 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 268.27
341 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
342 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.12
343 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.27
344 TestStartStop/group/embed-certs/serial/Pause 3.2
346 TestStartStop/group/newest-cni/serial/FirstStart 39.62
347 TestStartStop/group/newest-cni/serial/DeployApp 0
348 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.19
349 TestStartStop/group/newest-cni/serial/Stop 5.88
350 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.22
351 TestStartStop/group/newest-cni/serial/SecondStart 18.37
352 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
353 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
354 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.39
355 TestStartStop/group/newest-cni/serial/Pause 3.5
356 TestNetworkPlugins/group/auto/Start 46.56
357 TestNetworkPlugins/group/auto/KubeletFlags 0.3
358 TestNetworkPlugins/group/auto/NetCatPod 11.27
359 TestNetworkPlugins/group/auto/DNS 0.18
360 TestNetworkPlugins/group/auto/Localhost 0.18
361 TestNetworkPlugins/group/auto/HairPin 0.17
362 TestNetworkPlugins/group/kindnet/Start 68.65
363 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
364 TestNetworkPlugins/group/kindnet/KubeletFlags 0.43
365 TestNetworkPlugins/group/kindnet/NetCatPod 11.37
366 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
367 TestNetworkPlugins/group/kindnet/DNS 0.19
368 TestNetworkPlugins/group/kindnet/Localhost 0.18
369 TestNetworkPlugins/group/kindnet/HairPin 0.16
370 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.17
371 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.29
372 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.97
373 TestNetworkPlugins/group/calico/Start 90.47
374 TestNetworkPlugins/group/custom-flannel/Start 65.86
375 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.4
376 TestNetworkPlugins/group/custom-flannel/NetCatPod 12.45
377 TestNetworkPlugins/group/custom-flannel/DNS 0.23
378 TestNetworkPlugins/group/custom-flannel/Localhost 0.17
379 TestNetworkPlugins/group/custom-flannel/HairPin 0.19
380 TestNetworkPlugins/group/calico/ControllerPod 6.01
381 TestNetworkPlugins/group/calico/KubeletFlags 0.37
382 TestNetworkPlugins/group/calico/NetCatPod 11.43
383 TestNetworkPlugins/group/calico/DNS 0.3
384 TestNetworkPlugins/group/calico/Localhost 0.34
385 TestNetworkPlugins/group/calico/HairPin 0.24
386 TestNetworkPlugins/group/false/Start 76.88
387 TestNetworkPlugins/group/enable-default-cni/Start 75.79
388 TestNetworkPlugins/group/false/KubeletFlags 0.31
389 TestNetworkPlugins/group/false/NetCatPod 9.28
390 TestNetworkPlugins/group/false/DNS 0.19
391 TestNetworkPlugins/group/false/Localhost 0.16
392 TestNetworkPlugins/group/false/HairPin 0.18
393 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.41
394 TestNetworkPlugins/group/enable-default-cni/NetCatPod 12.33
395 TestNetworkPlugins/group/flannel/Start 59.13
396 TestNetworkPlugins/group/enable-default-cni/DNS 0.19
397 TestNetworkPlugins/group/enable-default-cni/Localhost 0.16
398 TestNetworkPlugins/group/enable-default-cni/HairPin 0.18
399 TestNetworkPlugins/group/bridge/Start 79.08
400 TestNetworkPlugins/group/flannel/ControllerPod 6.01
401 TestNetworkPlugins/group/flannel/KubeletFlags 0.38
402 TestNetworkPlugins/group/flannel/NetCatPod 11.36
403 TestNetworkPlugins/group/flannel/DNS 0.21
404 TestNetworkPlugins/group/flannel/Localhost 0.16
405 TestNetworkPlugins/group/flannel/HairPin 0.19
406 TestNetworkPlugins/group/kubenet/Start 77.85
407 TestNetworkPlugins/group/bridge/KubeletFlags 0.42
408 TestNetworkPlugins/group/bridge/NetCatPod 12.32
409 TestNetworkPlugins/group/bridge/DNS 0.24
410 TestNetworkPlugins/group/bridge/Localhost 0.19
411 TestNetworkPlugins/group/bridge/HairPin 0.22
412 TestNetworkPlugins/group/kubenet/KubeletFlags 0.26
413 TestNetworkPlugins/group/kubenet/NetCatPod 11.28
414 TestNetworkPlugins/group/kubenet/DNS 0.17
415 TestNetworkPlugins/group/kubenet/Localhost 0.17
416 TestNetworkPlugins/group/kubenet/HairPin 0.16
TestDownloadOnly/v1.20.0/json-events (6.16s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-170417 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-170417 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=docker  --container-runtime=docker: (6.160403829s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (6.16s)

TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I0317 13:15:01.239257 1120731 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
I0317 13:15:01.239346 1120731 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20539-1115410/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.1s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-170417
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-170417: exit status 85 (96.968319ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-170417 | jenkins | v1.35.0 | 17 Mar 25 13:14 UTC |          |
	|         | -p download-only-170417        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2025/03/17 13:14:55
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0317 13:14:55.127113 1120737 out.go:345] Setting OutFile to fd 1 ...
	I0317 13:14:55.127600 1120737 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0317 13:14:55.127612 1120737 out.go:358] Setting ErrFile to fd 2...
	I0317 13:14:55.127620 1120737 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0317 13:14:55.128433 1120737 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20539-1115410/.minikube/bin
	W0317 13:14:55.128683 1120737 root.go:314] Error reading config file at /home/jenkins/minikube-integration/20539-1115410/.minikube/config/config.json: open /home/jenkins/minikube-integration/20539-1115410/.minikube/config/config.json: no such file or directory
	I0317 13:14:55.129216 1120737 out.go:352] Setting JSON to true
	I0317 13:14:55.130131 1120737 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":32246,"bootTime":1742185049,"procs":151,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1077-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0317 13:14:55.130313 1120737 start.go:139] virtualization:  
	I0317 13:14:55.134500 1120737 out.go:97] [download-only-170417] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	W0317 13:14:55.134643 1120737 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/20539-1115410/.minikube/cache/preloaded-tarball: no such file or directory
	I0317 13:14:55.134680 1120737 notify.go:220] Checking for updates...
	I0317 13:14:55.137640 1120737 out.go:169] MINIKUBE_LOCATION=20539
	I0317 13:14:55.141003 1120737 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0317 13:14:55.144023 1120737 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20539-1115410/kubeconfig
	I0317 13:14:55.146961 1120737 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20539-1115410/.minikube
	I0317 13:14:55.150012 1120737 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0317 13:14:55.155947 1120737 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0317 13:14:55.156262 1120737 driver.go:394] Setting default libvirt URI to qemu:///system
	I0317 13:14:55.178290 1120737 docker.go:123] docker version: linux-28.0.1:Docker Engine - Community
	I0317 13:14:55.178389 1120737 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0317 13:14:55.234543 1120737 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:28 OomKillDisable:true NGoroutines:53 SystemTime:2025-03-17 13:14:55.225087774 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1077-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.21.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.33.1]] Warnings:<nil>}}
	I0317 13:14:55.234649 1120737 docker.go:318] overlay module found
	I0317 13:14:55.237555 1120737 out.go:97] Using the docker driver based on user configuration
	I0317 13:14:55.237595 1120737 start.go:297] selected driver: docker
	I0317 13:14:55.237602 1120737 start.go:901] validating driver "docker" against <nil>
	I0317 13:14:55.237711 1120737 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0317 13:14:55.300584 1120737 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:28 OomKillDisable:true NGoroutines:53 SystemTime:2025-03-17 13:14:55.292219312 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1077-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.21.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.33.1]] Warnings:<nil>}}
	I0317 13:14:55.300730 1120737 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0317 13:14:55.301023 1120737 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0317 13:14:55.301196 1120737 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0317 13:14:55.304374 1120737 out.go:169] Using Docker driver with root privileges
	I0317 13:14:55.307084 1120737 cni.go:84] Creating CNI manager for ""
	I0317 13:14:55.307159 1120737 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0317 13:14:55.307229 1120737 start.go:340] cluster config:
	{Name:download-only-170417 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-170417 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0317 13:14:55.310221 1120737 out.go:97] Starting "download-only-170417" primary control-plane node in "download-only-170417" cluster
	I0317 13:14:55.310263 1120737 cache.go:121] Beginning downloading kic base image for docker with docker
	I0317 13:14:55.313143 1120737 out.go:97] Pulling base image v0.0.46-1741860993-20523 ...
	I0317 13:14:55.313176 1120737 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0317 13:14:55.313286 1120737 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 in local docker daemon
	I0317 13:14:55.328934 1120737 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 to local cache
	I0317 13:14:55.329842 1120737 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 in local cache directory
	I0317 13:14:55.329948 1120737 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 to local cache
	I0317 13:14:55.376559 1120737 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0317 13:14:55.376590 1120737 cache.go:56] Caching tarball of preloaded images
	I0317 13:14:55.377320 1120737 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0317 13:14:55.380566 1120737 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0317 13:14:55.380596 1120737 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0317 13:14:55.471216 1120737 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /home/jenkins/minikube-integration/20539-1115410/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0317 13:14:59.595115 1120737 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0317 13:14:59.595208 1120737 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/20539-1115410/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0317 13:14:59.762804 1120737 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 as a tarball
	
	
	* The control-plane node download-only-170417 host does not exist
	  To start a cluster, run: "minikube start -p download-only-170417"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.10s)

TestDownloadOnly/v1.20.0/DeleteAll (0.23s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.23s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.15s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-170417
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.15s)

TestDownloadOnly/v1.32.2/json-events (5.22s)

=== RUN   TestDownloadOnly/v1.32.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-131079 --force --alsologtostderr --kubernetes-version=v1.32.2 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-131079 --force --alsologtostderr --kubernetes-version=v1.32.2 --container-runtime=docker --driver=docker  --container-runtime=docker: (5.215874355s)
--- PASS: TestDownloadOnly/v1.32.2/json-events (5.22s)

TestDownloadOnly/v1.32.2/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.32.2/preload-exists
I0317 13:15:06.932335 1120731 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime docker
I0317 13:15:06.932375 1120731 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20539-1115410/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-docker-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.32.2/preload-exists (0.00s)

TestDownloadOnly/v1.32.2/LogsDuration (0.1s)

=== RUN   TestDownloadOnly/v1.32.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-131079
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-131079: exit status 85 (95.237466ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-170417 | jenkins | v1.35.0 | 17 Mar 25 13:14 UTC |                     |
	|         | -p download-only-170417        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.35.0 | 17 Mar 25 13:15 UTC | 17 Mar 25 13:15 UTC |
	| delete  | -p download-only-170417        | download-only-170417 | jenkins | v1.35.0 | 17 Mar 25 13:15 UTC | 17 Mar 25 13:15 UTC |
	| start   | -o=json --download-only        | download-only-131079 | jenkins | v1.35.0 | 17 Mar 25 13:15 UTC |                     |
	|         | -p download-only-131079        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/03/17 13:15:01
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0317 13:15:01.764907 1120938 out.go:345] Setting OutFile to fd 1 ...
	I0317 13:15:01.765110 1120938 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0317 13:15:01.765142 1120938 out.go:358] Setting ErrFile to fd 2...
	I0317 13:15:01.765165 1120938 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0317 13:15:01.766088 1120938 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20539-1115410/.minikube/bin
	I0317 13:15:01.766624 1120938 out.go:352] Setting JSON to true
	I0317 13:15:01.767512 1120938 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":32253,"bootTime":1742185049,"procs":148,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1077-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0317 13:15:01.767623 1120938 start.go:139] virtualization:  
	I0317 13:15:01.771109 1120938 out.go:97] [download-only-131079] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	I0317 13:15:01.771578 1120938 notify.go:220] Checking for updates...
	I0317 13:15:01.774313 1120938 out.go:169] MINIKUBE_LOCATION=20539
	I0317 13:15:01.777687 1120938 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0317 13:15:01.780638 1120938 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20539-1115410/kubeconfig
	I0317 13:15:01.783756 1120938 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20539-1115410/.minikube
	I0317 13:15:01.786827 1120938 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0317 13:15:01.792681 1120938 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0317 13:15:01.792957 1120938 driver.go:394] Setting default libvirt URI to qemu:///system
	I0317 13:15:01.825001 1120938 docker.go:123] docker version: linux-28.0.1:Docker Engine - Community
	I0317 13:15:01.825103 1120938 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0317 13:15:01.884778 1120938 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:28 OomKillDisable:true NGoroutines:46 SystemTime:2025-03-17 13:15:01.875164164 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1077-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> Se
rverErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.21.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.33.1]] Warnings:<nil>}}
	I0317 13:15:01.884896 1120938 docker.go:318] overlay module found
	I0317 13:15:01.888068 1120938 out.go:97] Using the docker driver based on user configuration
	I0317 13:15:01.888107 1120938 start.go:297] selected driver: docker
	I0317 13:15:01.888120 1120938 start.go:901] validating driver "docker" against <nil>
	I0317 13:15:01.888242 1120938 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0317 13:15:01.947083 1120938 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:28 OomKillDisable:true NGoroutines:46 SystemTime:2025-03-17 13:15:01.937649789 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1077-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> Se
rverErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.21.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.33.1]] Warnings:<nil>}}
	I0317 13:15:01.947261 1120938 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0317 13:15:01.947548 1120938 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0317 13:15:01.947700 1120938 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0317 13:15:01.950928 1120938 out.go:169] Using Docker driver with root privileges
	I0317 13:15:01.953801 1120938 cni.go:84] Creating CNI manager for ""
	I0317 13:15:01.953888 1120938 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0317 13:15:01.953908 1120938 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0317 13:15:01.953993 1120938 start.go:340] cluster config:
	{Name:download-only-131079 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:download-only-131079 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0317 13:15:01.956892 1120938 out.go:97] Starting "download-only-131079" primary control-plane node in "download-only-131079" cluster
	I0317 13:15:01.956925 1120938 cache.go:121] Beginning downloading kic base image for docker with docker
	I0317 13:15:01.959690 1120938 out.go:97] Pulling base image v0.0.46-1741860993-20523 ...
	I0317 13:15:01.959762 1120938 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime docker
	I0317 13:15:01.959863 1120938 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 in local docker daemon
	I0317 13:15:01.977978 1120938 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 to local cache
	I0317 13:15:01.978117 1120938 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 in local cache directory
	I0317 13:15:01.978137 1120938 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 in local cache directory, skipping pull
	I0317 13:15:01.978146 1120938 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 exists in cache, skipping pull
	I0317 13:15:01.978154 1120938 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 as a tarball
	I0317 13:15:02.020215 1120938 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.2/preloaded-images-k8s-v18-v1.32.2-docker-overlay2-arm64.tar.lz4
	I0317 13:15:02.020243 1120938 cache.go:56] Caching tarball of preloaded images
	I0317 13:15:02.020450 1120938 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime docker
	I0317 13:15:02.023747 1120938 out.go:97] Downloading Kubernetes v1.32.2 preload ...
	I0317 13:15:02.023806 1120938 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.32.2-docker-overlay2-arm64.tar.lz4 ...
	I0317 13:15:02.112369 1120938 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.2/preloaded-images-k8s-v18-v1.32.2-docker-overlay2-arm64.tar.lz4?checksum=md5:0f214d8e9732f3a450da0811727c623c -> /home/jenkins/minikube-integration/20539-1115410/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-docker-overlay2-arm64.tar.lz4
	I0317 13:15:05.548796 1120938 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.32.2-docker-overlay2-arm64.tar.lz4 ...
	I0317 13:15:05.548901 1120938 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/20539-1115410/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-docker-overlay2-arm64.tar.lz4 ...
	I0317 13:15:06.325999 1120938 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on docker
	I0317 13:15:06.326358 1120938 profile.go:143] Saving config to /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/download-only-131079/config.json ...
	I0317 13:15:06.326393 1120938 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/download-only-131079/config.json: {Name:mkee56267e966b1fbeaa620ad80a6c8ff3ec1926 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 13:15:06.326582 1120938 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime docker
	I0317 13:15:06.326735 1120938 download.go:108] Downloading: https://dl.k8s.io/release/v1.32.2/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/20539-1115410/.minikube/cache/linux/arm64/v1.32.2/kubectl
	
	
	* The control-plane node download-only-131079 host does not exist
	  To start a cluster, run: "minikube start -p download-only-131079"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.32.2/LogsDuration (0.10s)

TestDownloadOnly/v1.32.2/DeleteAll (0.21s)

=== RUN   TestDownloadOnly/v1.32.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.32.2/DeleteAll (0.21s)

TestDownloadOnly/v1.32.2/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.32.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-131079
--- PASS: TestDownloadOnly/v1.32.2/DeleteAlwaysSucceeds (0.14s)

TestBinaryMirror (0.6s)

=== RUN   TestBinaryMirror
I0317 13:15:08.230016 1120731 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.32.2/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-651197 --alsologtostderr --binary-mirror http://127.0.0.1:40285 --driver=docker  --container-runtime=docker
helpers_test.go:175: Cleaning up "binary-mirror-651197" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-651197
--- PASS: TestBinaryMirror (0.60s)

TestOffline (91.01s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-arm64 start -p offline-docker-955616 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=docker
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-arm64 start -p offline-docker-955616 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=docker: (1m28.480437279s)
helpers_test.go:175: Cleaning up "offline-docker-955616" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p offline-docker-955616
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p offline-docker-955616: (2.531464042s)
--- PASS: TestOffline (91.01s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-464596
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-464596: exit status 85 (73.830213ms)

-- stdout --
	* Profile "addons-464596" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-464596"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:950: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-464596
addons_test.go:950: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-464596: exit status 85 (78.606308ms)

-- stdout --
	* Profile "addons-464596" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-464596"
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)
TestAddons/Setup (219.78s)
=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-arm64 start -p addons-464596 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:107: (dbg) Done: out/minikube-linux-arm64 start -p addons-464596 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (3m39.779273838s)
--- PASS: TestAddons/Setup (219.78s)
TestAddons/serial/Volcano (41.11s)
=== RUN   TestAddons/serial/Volcano
addons_test.go:815: volcano-admission stabilized in 62.427846ms
addons_test.go:823: volcano-controller stabilized in 62.757872ms
addons_test.go:807: volcano-scheduler stabilized in 63.285157ms
addons_test.go:829: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-75fdd99bcf-zdxpc" [26fa0492-fc20-420e-827c-cf650fcd5b67] Running
addons_test.go:829: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.002951126s
addons_test.go:833: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-75d8f6b5c-kfms2" [777d6bbc-a071-44df-a23b-f4547f4e5a71] Running
addons_test.go:833: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.003112796s
addons_test.go:837: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-86bdc5c9c-7wgz5" [bc6143c6-06d7-49cf-99bc-a32d144751b3] Running
addons_test.go:837: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.003455309s
addons_test.go:842: (dbg) Run:  kubectl --context addons-464596 delete -n volcano-system job volcano-admission-init
addons_test.go:848: (dbg) Run:  kubectl --context addons-464596 create -f testdata/vcjob.yaml
addons_test.go:856: (dbg) Run:  kubectl --context addons-464596 get vcjob -n my-volcano
addons_test.go:874: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [146aa00c-9ec5-4651-b34c-0c77de04c7fa] Pending
helpers_test.go:344: "test-job-nginx-0" [146aa00c-9ec5-4651-b34c-0c77de04c7fa] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [146aa00c-9ec5-4651-b34c-0c77de04c7fa] Running
addons_test.go:874: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 13.003395369s
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-464596 addons disable volcano --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-464596 addons disable volcano --alsologtostderr -v=1: (11.405945264s)
--- PASS: TestAddons/serial/Volcano (41.11s)
TestAddons/serial/GCPAuth/Namespaces (0.19s)
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:569: (dbg) Run:  kubectl --context addons-464596 create ns new-namespace
addons_test.go:583: (dbg) Run:  kubectl --context addons-464596 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.19s)
TestAddons/serial/GCPAuth/FakeCredentials (10.02s)
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:614: (dbg) Run:  kubectl --context addons-464596 create -f testdata/busybox.yaml
addons_test.go:621: (dbg) Run:  kubectl --context addons-464596 create sa gcp-auth-test
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [26f5742e-ca47-4ebf-80d9-d7abd681da90] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [26f5742e-ca47-4ebf-80d9-d7abd681da90] Running
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 9.00334499s
addons_test.go:633: (dbg) Run:  kubectl --context addons-464596 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:645: (dbg) Run:  kubectl --context addons-464596 describe sa gcp-auth-test
addons_test.go:659: (dbg) Run:  kubectl --context addons-464596 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:683: (dbg) Run:  kubectl --context addons-464596 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (10.02s)
TestAddons/parallel/Registry (15.82s)
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry
=== CONT  TestAddons/parallel/Registry
addons_test.go:321: registry stabilized in 6.91524ms
addons_test.go:323: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-6c88467877-5g8m7" [42e70be1-8400-433b-94c6-306adbba8fab] Running
addons_test.go:323: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.00355209s
addons_test.go:326: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-pbq55" [d7b09797-74f7-4fd5-a867-f8a3773ee614] Running
addons_test.go:326: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.002874455s
addons_test.go:331: (dbg) Run:  kubectl --context addons-464596 delete po -l run=registry-test --now
addons_test.go:336: (dbg) Run:  kubectl --context addons-464596 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:336: (dbg) Done: kubectl --context addons-464596 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.955993541s)
addons_test.go:350: (dbg) Run:  out/minikube-linux-arm64 -p addons-464596 ip
2025/03/17 13:20:04 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-464596 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (15.82s)
TestAddons/parallel/Ingress (20.18s)
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-464596 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-464596 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-464596 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [1f4911f5-0a6f-4376-b2d3-c3565958a869] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [1f4911f5-0a6f-4376-b2d3-c3565958a869] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.003685501s
I0317 13:20:57.298628 1120731 kapi.go:150] Service nginx in namespace default found.
addons_test.go:262: (dbg) Run:  out/minikube-linux-arm64 -p addons-464596 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:286: (dbg) Run:  kubectl --context addons-464596 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-arm64 -p addons-464596 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-464596 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-464596 addons disable ingress-dns --alsologtostderr -v=1: (1.467673209s)
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-464596 addons disable ingress --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-464596 addons disable ingress --alsologtostderr -v=1: (7.950955403s)
--- PASS: TestAddons/parallel/Ingress (20.18s)
TestAddons/parallel/InspektorGadget (11.73s)
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-gsfmp" [9a3f316d-5bed-4073-99db-45e69c8c4664] Running
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.002921487s
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-464596 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-464596 addons disable inspektor-gadget --alsologtostderr -v=1: (5.727691895s)
--- PASS: TestAddons/parallel/InspektorGadget (11.73s)
TestAddons/parallel/MetricsServer (6.86s)
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:394: metrics-server stabilized in 3.269112ms
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7fbb699795-vtlwg" [87c02c2e-f9e2-4ec6-ae08-8d20dfb07575] Running
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.00299306s
addons_test.go:402: (dbg) Run:  kubectl --context addons-464596 top pods -n kube-system
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-464596 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.86s)
TestAddons/parallel/CSI (55.5s)
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
I0317 13:20:13.604825 1120731 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0317 13:20:13.610918 1120731 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0317 13:20:13.610943 1120731 kapi.go:107] duration metric: took 9.001774ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:488: csi-hostpath-driver pods stabilized in 9.012219ms
addons_test.go:491: (dbg) Run:  kubectl --context addons-464596 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:496: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-464596 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-464596 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-464596 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-464596 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-464596 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-464596 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-464596 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-464596 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-464596 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-464596 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-464596 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-464596 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-464596 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:501: (dbg) Run:  kubectl --context addons-464596 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:506: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [f6ada5b9-b460-4f35-9bcc-13a483d35a68] Pending
helpers_test.go:344: "task-pv-pod" [f6ada5b9-b460-4f35-9bcc-13a483d35a68] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [f6ada5b9-b460-4f35-9bcc-13a483d35a68] Running
addons_test.go:506: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 8.004065628s
addons_test.go:511: (dbg) Run:  kubectl --context addons-464596 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:516: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-464596 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-464596 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:521: (dbg) Run:  kubectl --context addons-464596 delete pod task-pv-pod
addons_test.go:521: (dbg) Done: kubectl --context addons-464596 delete pod task-pv-pod: (1.593963905s)
addons_test.go:527: (dbg) Run:  kubectl --context addons-464596 delete pvc hpvc
addons_test.go:533: (dbg) Run:  kubectl --context addons-464596 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:538: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-464596 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-464596 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-464596 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-464596 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-464596 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-464596 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-464596 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-464596 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-464596 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-464596 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-464596 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-464596 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-464596 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-464596 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-464596 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-464596 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:543: (dbg) Run:  kubectl --context addons-464596 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:548: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [c9ab7bfe-9498-4dee-8e60-ea17e798de88] Pending
helpers_test.go:344: "task-pv-pod-restore" [c9ab7bfe-9498-4dee-8e60-ea17e798de88] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [c9ab7bfe-9498-4dee-8e60-ea17e798de88] Running
addons_test.go:548: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.003412567s
addons_test.go:553: (dbg) Run:  kubectl --context addons-464596 delete pod task-pv-pod-restore
addons_test.go:553: (dbg) Done: kubectl --context addons-464596 delete pod task-pv-pod-restore: (1.240646788s)
addons_test.go:557: (dbg) Run:  kubectl --context addons-464596 delete pvc hpvc-restore
addons_test.go:561: (dbg) Run:  kubectl --context addons-464596 delete volumesnapshot new-snapshot-demo
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-464596 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-464596 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-464596 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.88945395s)
--- PASS: TestAddons/parallel/CSI (55.50s)
TestAddons/parallel/Headlamp (17.62s)
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:747: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-464596 --alsologtostderr -v=1
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-5d4b5d7bd6-kjn25" [e31f80e1-b79b-43bc-92f7-3d7a9d60c1bd] Pending
helpers_test.go:344: "headlamp-5d4b5d7bd6-kjn25" [e31f80e1-b79b-43bc-92f7-3d7a9d60c1bd] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-5d4b5d7bd6-kjn25" [e31f80e1-b79b-43bc-92f7-3d7a9d60c1bd] Running / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-5d4b5d7bd6-kjn25" [e31f80e1-b79b-43bc-92f7-3d7a9d60c1bd] Running
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.003725392s
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-464596 addons disable headlamp --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-464596 addons disable headlamp --alsologtostderr -v=1: (5.670955745s)
--- PASS: TestAddons/parallel/Headlamp (17.62s)
TestAddons/parallel/CloudSpanner (5.63s)
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-cc9755fc7-422rt" [97622bc9-7197-4ca9-b58c-f5853322d09e] Running
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.004895031s
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-464596 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.63s)
TestAddons/parallel/LocalPath (8.79s)
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:888: (dbg) Run:  kubectl --context addons-464596 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:894: (dbg) Run:  kubectl --context addons-464596 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:898: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-464596 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-464596 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-464596 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-464596 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-464596 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [dd6fc0e2-bc08-49be-855f-49c8d471aa12] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [dd6fc0e2-bc08-49be-855f-49c8d471aa12] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [dd6fc0e2-bc08-49be-855f-49c8d471aa12] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.003270447s
addons_test.go:906: (dbg) Run:  kubectl --context addons-464596 get pvc test-pvc -o=json
addons_test.go:915: (dbg) Run:  out/minikube-linux-arm64 -p addons-464596 ssh "cat /opt/local-path-provisioner/pvc-a8dff1e6-499a-4e63-b056-bfd60383b48a_default_test-pvc/file1"
addons_test.go:927: (dbg) Run:  kubectl --context addons-464596 delete pod test-local-path
addons_test.go:931: (dbg) Run:  kubectl --context addons-464596 delete pvc test-pvc
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-464596 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (8.79s)
TestAddons/parallel/NvidiaDevicePlugin (5.99s)
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-42tz9" [7f527006-a638-4e6e-8f0a-ff42c066e4b6] Running
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.003634614s
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-464596 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.99s)
TestAddons/parallel/Yakd (10.8s)
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-575dd5996b-9sz4q" [bf659229-1bd7-4ca2-9bba-4a60c9d36670] Running
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.003795552s
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-464596 addons disable yakd --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-464596 addons disable yakd --alsologtostderr -v=1: (5.792030723s)
--- PASS: TestAddons/parallel/Yakd (10.80s)
TestAddons/StoppedEnableDisable (11.24s)
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-464596
addons_test.go:170: (dbg) Done: out/minikube-linux-arm64 stop -p addons-464596: (10.968062224s)
addons_test.go:174: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-464596
addons_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-464596
addons_test.go:183: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-464596
--- PASS: TestAddons/StoppedEnableDisable (11.24s)
TestCertOptions (49.38s)
=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-737053 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-737053 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker: (46.10889748s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-737053 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-737053 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-737053 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-737053" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-737053
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-737053: (2.25686878s)
--- PASS: TestCertOptions (49.38s)

TestCertExpiration (249.41s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-040043 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=docker
E0317 14:03:42.903257 1120731 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/skaffold-463896/client.crt: no such file or directory" logger="UnhandledError"
E0317 14:03:48.732282 1120731 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/addons-464596/client.crt: no such file or directory" logger="UnhandledError"
E0317 14:04:10.605483 1120731 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/skaffold-463896/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-040043 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=docker: (42.757949366s)
E0317 14:04:58.219780 1120731 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/functional-027308/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-040043 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=docker
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-040043 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=docker: (24.357017475s)
helpers_test.go:175: Cleaning up "cert-expiration-040043" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-040043
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-040043: (2.289461395s)
--- PASS: TestCertExpiration (249.41s)

TestDockerFlags (46.45s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-linux-arm64 start -p docker-flags-565361 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
E0317 14:01:26.762669 1120731 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/skaffold-463896/client.crt: no such file or directory" logger="UnhandledError"
docker_test.go:51: (dbg) Done: out/minikube-linux-arm64 start -p docker-flags-565361 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (41.828279843s)
docker_test.go:56: (dbg) Run:  out/minikube-linux-arm64 -p docker-flags-565361 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-linux-arm64 -p docker-flags-565361 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-565361" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-flags-565361
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-flags-565361: (3.820633197s)
--- PASS: TestDockerFlags (46.45s)

TestForceSystemdFlag (42.11s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-387767 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
E0317 14:03:01.292831 1120731 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/functional-027308/client.crt: no such file or directory" logger="UnhandledError"
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-387767 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (39.395760717s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-387767 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-387767" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-387767
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-387767: (2.36630541s)
--- PASS: TestForceSystemdFlag (42.11s)

TestForceSystemdEnv (42.98s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-604967 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-604967 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (40.409184345s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-604967 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-604967" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-604967
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-604967: (2.240046423s)
--- PASS: TestForceSystemdEnv (42.98s)

TestErrorSpam/setup (31.9s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-547246 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-547246 --driver=docker  --container-runtime=docker
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-547246 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-547246 --driver=docker  --container-runtime=docker: (31.904225931s)
--- PASS: TestErrorSpam/setup (31.90s)

TestErrorSpam/start (0.8s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-547246 --log_dir /tmp/nospam-547246 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-547246 --log_dir /tmp/nospam-547246 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-547246 --log_dir /tmp/nospam-547246 start --dry-run
--- PASS: TestErrorSpam/start (0.80s)

TestErrorSpam/status (1.05s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-547246 --log_dir /tmp/nospam-547246 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-547246 --log_dir /tmp/nospam-547246 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-547246 --log_dir /tmp/nospam-547246 status
--- PASS: TestErrorSpam/status (1.05s)

TestErrorSpam/pause (1.4s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-547246 --log_dir /tmp/nospam-547246 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-547246 --log_dir /tmp/nospam-547246 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-547246 --log_dir /tmp/nospam-547246 pause
--- PASS: TestErrorSpam/pause (1.40s)

TestErrorSpam/unpause (1.7s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-547246 --log_dir /tmp/nospam-547246 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-547246 --log_dir /tmp/nospam-547246 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-547246 --log_dir /tmp/nospam-547246 unpause
--- PASS: TestErrorSpam/unpause (1.70s)

TestErrorSpam/stop (11.15s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-547246 --log_dir /tmp/nospam-547246 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-547246 --log_dir /tmp/nospam-547246 stop: (10.944623338s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-547246 --log_dir /tmp/nospam-547246 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-547246 --log_dir /tmp/nospam-547246 stop
--- PASS: TestErrorSpam/stop (11.15s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1872: local sync path: /home/jenkins/minikube-integration/20539-1115410/.minikube/files/etc/test/nested/copy/1120731/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (72.72s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2251: (dbg) Run:  out/minikube-linux-arm64 start -p functional-027308 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker
functional_test.go:2251: (dbg) Done: out/minikube-linux-arm64 start -p functional-027308 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker: (1m12.719759328s)
--- PASS: TestFunctional/serial/StartWithProxy (72.72s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (34.91s)

=== RUN   TestFunctional/serial/SoftStart
I0317 13:23:26.220572 1120731 config.go:182] Loaded profile config "functional-027308": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.2
functional_test.go:676: (dbg) Run:  out/minikube-linux-arm64 start -p functional-027308 --alsologtostderr -v=8
E0317 13:23:48.734622 1120731 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/addons-464596/client.crt: no such file or directory" logger="UnhandledError"
E0317 13:23:48.741724 1120731 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/addons-464596/client.crt: no such file or directory" logger="UnhandledError"
E0317 13:23:48.753093 1120731 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/addons-464596/client.crt: no such file or directory" logger="UnhandledError"
E0317 13:23:48.774427 1120731 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/addons-464596/client.crt: no such file or directory" logger="UnhandledError"
E0317 13:23:48.815896 1120731 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/addons-464596/client.crt: no such file or directory" logger="UnhandledError"
E0317 13:23:48.897304 1120731 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/addons-464596/client.crt: no such file or directory" logger="UnhandledError"
E0317 13:23:49.059305 1120731 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/addons-464596/client.crt: no such file or directory" logger="UnhandledError"
E0317 13:23:49.380982 1120731 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/addons-464596/client.crt: no such file or directory" logger="UnhandledError"
E0317 13:23:50.023320 1120731 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/addons-464596/client.crt: no such file or directory" logger="UnhandledError"
E0317 13:23:51.305044 1120731 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/addons-464596/client.crt: no such file or directory" logger="UnhandledError"
E0317 13:23:53.866741 1120731 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/addons-464596/client.crt: no such file or directory" logger="UnhandledError"
E0317 13:23:58.988054 1120731 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/addons-464596/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:676: (dbg) Done: out/minikube-linux-arm64 start -p functional-027308 --alsologtostderr -v=8: (34.908986094s)
functional_test.go:680: soft start took 34.911926836s for "functional-027308" cluster.
I0317 13:24:01.129973 1120731 config.go:182] Loaded profile config "functional-027308": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.2
--- PASS: TestFunctional/serial/SoftStart (34.91s)

TestFunctional/serial/KubeContext (0.07s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:698: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.07s)

TestFunctional/serial/KubectlGetPods (0.11s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:713: (dbg) Run:  kubectl --context functional-027308 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.11s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.24s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1066: (dbg) Run:  out/minikube-linux-arm64 -p functional-027308 cache add registry.k8s.io/pause:3.1
functional_test.go:1066: (dbg) Done: out/minikube-linux-arm64 -p functional-027308 cache add registry.k8s.io/pause:3.1: (1.160739406s)
functional_test.go:1066: (dbg) Run:  out/minikube-linux-arm64 -p functional-027308 cache add registry.k8s.io/pause:3.3
functional_test.go:1066: (dbg) Done: out/minikube-linux-arm64 -p functional-027308 cache add registry.k8s.io/pause:3.3: (1.170267659s)
functional_test.go:1066: (dbg) Run:  out/minikube-linux-arm64 -p functional-027308 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.24s)

TestFunctional/serial/CacheCmd/cache/add_local (1.03s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1094: (dbg) Run:  docker build -t minikube-local-cache-test:functional-027308 /tmp/TestFunctionalserialCacheCmdcacheadd_local594332095/001
functional_test.go:1106: (dbg) Run:  out/minikube-linux-arm64 -p functional-027308 cache add minikube-local-cache-test:functional-027308
functional_test.go:1111: (dbg) Run:  out/minikube-linux-arm64 -p functional-027308 cache delete minikube-local-cache-test:functional-027308
functional_test.go:1100: (dbg) Run:  docker rmi minikube-local-cache-test:functional-027308
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.03s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1119: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1127: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1141: (dbg) Run:  out/minikube-linux-arm64 -p functional-027308 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.68s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1164: (dbg) Run:  out/minikube-linux-arm64 -p functional-027308 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1170: (dbg) Run:  out/minikube-linux-arm64 -p functional-027308 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1170: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-027308 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (282.779218ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr **
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1175: (dbg) Run:  out/minikube-linux-arm64 -p functional-027308 cache reload
functional_test.go:1180: (dbg) Run:  out/minikube-linux-arm64 -p functional-027308 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.68s)

TestFunctional/serial/CacheCmd/cache/delete (0.12s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1189: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1189: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

TestFunctional/serial/MinikubeKubectlCmd (0.14s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:733: (dbg) Run:  out/minikube-linux-arm64 -p functional-027308 kubectl -- --context functional-027308 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.14s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.15s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:758: (dbg) Run:  out/kubectl --context functional-027308 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.15s)

TestFunctional/serial/ExtraConfig (41.01s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:774: (dbg) Run:  out/minikube-linux-arm64 start -p functional-027308 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0317 13:24:09.230266 1120731 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/addons-464596/client.crt: no such file or directory" logger="UnhandledError"
E0317 13:24:29.711898 1120731 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/addons-464596/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:774: (dbg) Done: out/minikube-linux-arm64 start -p functional-027308 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (41.013878648s)
functional_test.go:778: restart took 41.0139702s for "functional-027308" cluster.
I0317 13:24:49.109510 1120731 config.go:182] Loaded profile config "functional-027308": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.2
--- PASS: TestFunctional/serial/ExtraConfig (41.01s)

TestFunctional/serial/ComponentHealth (0.09s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:827: (dbg) Run:  kubectl --context functional-027308 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:842: etcd phase: Running
functional_test.go:852: etcd status: Ready
functional_test.go:842: kube-apiserver phase: Running
functional_test.go:852: kube-apiserver status: Ready
functional_test.go:842: kube-controller-manager phase: Running
functional_test.go:852: kube-controller-manager status: Ready
functional_test.go:842: kube-scheduler phase: Running
functional_test.go:852: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.09s)

TestFunctional/serial/LogsCmd (1.28s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1253: (dbg) Run:  out/minikube-linux-arm64 -p functional-027308 logs
functional_test.go:1253: (dbg) Done: out/minikube-linux-arm64 -p functional-027308 logs: (1.284356715s)
--- PASS: TestFunctional/serial/LogsCmd (1.28s)

TestFunctional/serial/LogsFileCmd (1.25s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1267: (dbg) Run:  out/minikube-linux-arm64 -p functional-027308 logs --file /tmp/TestFunctionalserialLogsFileCmd3277364173/001/logs.txt
functional_test.go:1267: (dbg) Done: out/minikube-linux-arm64 -p functional-027308 logs --file /tmp/TestFunctionalserialLogsFileCmd3277364173/001/logs.txt: (1.247894893s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.25s)

TestFunctional/serial/InvalidService (4.61s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2338: (dbg) Run:  kubectl --context functional-027308 apply -f testdata/invalidsvc.yaml
functional_test.go:2352: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-027308
functional_test.go:2352: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-027308: exit status 115 (707.471434ms)
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:30903 |
	|-----------|-------------|-------------|---------------------------|
-- /stdout --
** stderr **
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2344: (dbg) Run:  kubectl --context functional-027308 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.61s)

TestFunctional/parallel/ConfigCmd (0.5s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1216: (dbg) Run:  out/minikube-linux-arm64 -p functional-027308 config unset cpus
functional_test.go:1216: (dbg) Run:  out/minikube-linux-arm64 -p functional-027308 config get cpus
functional_test.go:1216: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-027308 config get cpus: exit status 14 (93.168251ms)
** stderr **
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1216: (dbg) Run:  out/minikube-linux-arm64 -p functional-027308 config set cpus 2
functional_test.go:1216: (dbg) Run:  out/minikube-linux-arm64 -p functional-027308 config get cpus
functional_test.go:1216: (dbg) Run:  out/minikube-linux-arm64 -p functional-027308 config unset cpus
functional_test.go:1216: (dbg) Run:  out/minikube-linux-arm64 -p functional-027308 config get cpus
functional_test.go:1216: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-027308 config get cpus: exit status 14 (71.789255ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.50s)
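The sequence above exercises the `config` exit-code contract: `config get` on an unset key exits 14 instead of 0, both before a value is set and after it is unset again. A minimal sketch of that round-trip, with a local stub standing in for the real `out/minikube-linux-arm64` binary (the stub, its state file, and the error-to-stderr behavior are illustrative assumptions based on the log):

```shell
# Stub emulating `minikube config`: `get` on an unset key exits 14,
# matching the two non-zero exits recorded above.
cfg_file=$(mktemp -u)
minikube_config() {
  case "$1" in
    set)   echo "$3" > "$cfg_file" ;;
    unset) rm -f "$cfg_file" ;;
    get)   if [ -f "$cfg_file" ]; then cat "$cfg_file"; else
             echo "Error: specified key could not be found in config" >&2
             return 14
           fi ;;
  esac
}

minikube_config unset cpus
minikube_config get cpus 2>/dev/null || echo "exit status $?"  # exit status 14
minikube_config set cpus 2
minikube_config get cpus                                       # 2
minikube_config unset cpus
```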

TestFunctional/parallel/DashboardCmd (11.51s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:922: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-027308 --alsologtostderr -v=1]
functional_test.go:927: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-027308 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 1161200: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (11.51s)
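The `unable to kill pid 1161200: os: process already finished` line is a benign race: the dashboard child exits on its own before the test's cleanup signals it. The same race sketched in shell, with a short-lived `true` child standing in for the dashboard process (an illustrative stand-in, not the test's actual mechanism):

```shell
# A child that exits immediately plays the role of the dashboard process.
true &
pid=$!
wait "$pid"   # the child is already gone when cleanup runs
kill "$pid" 2>/dev/null || echo "unable to kill pid $pid: process already finished"
```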

TestFunctional/parallel/DryRun (0.49s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:991: (dbg) Run:  out/minikube-linux-arm64 start -p functional-027308 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:991: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-027308 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (211.417901ms)

-- stdout --
	* [functional-027308] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=20539
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20539-1115410/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20539-1115410/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0317 13:25:31.215256 1160866 out.go:345] Setting OutFile to fd 1 ...
	I0317 13:25:31.215683 1160866 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0317 13:25:31.215718 1160866 out.go:358] Setting ErrFile to fd 2...
	I0317 13:25:31.215743 1160866 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0317 13:25:31.216071 1160866 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20539-1115410/.minikube/bin
	I0317 13:25:31.216447 1160866 out.go:352] Setting JSON to false
	I0317 13:25:31.217494 1160866 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":32882,"bootTime":1742185049,"procs":222,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1077-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0317 13:25:31.217589 1160866 start.go:139] virtualization:  
	I0317 13:25:31.220364 1160866 out.go:177] * [functional-027308] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	I0317 13:25:31.223721 1160866 out.go:177]   - MINIKUBE_LOCATION=20539
	I0317 13:25:31.223859 1160866 notify.go:220] Checking for updates...
	I0317 13:25:31.228956 1160866 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0317 13:25:31.231750 1160866 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20539-1115410/kubeconfig
	I0317 13:25:31.234485 1160866 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20539-1115410/.minikube
	I0317 13:25:31.237165 1160866 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0317 13:25:31.240378 1160866 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0317 13:25:31.244059 1160866 config.go:182] Loaded profile config "functional-027308": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0317 13:25:31.244616 1160866 driver.go:394] Setting default libvirt URI to qemu:///system
	I0317 13:25:31.281059 1160866 docker.go:123] docker version: linux-28.0.1:Docker Engine - Community
	I0317 13:25:31.281224 1160866 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0317 13:25:31.358472 1160866 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:53 SystemTime:2025-03-17 13:25:31.349671432 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1077-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.21.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.33.1]] Warnings:<nil>}}
	I0317 13:25:31.358576 1160866 docker.go:318] overlay module found
	I0317 13:25:31.361634 1160866 out.go:177] * Using the docker driver based on existing profile
	I0317 13:25:31.364335 1160866 start.go:297] selected driver: docker
	I0317 13:25:31.364354 1160866 start.go:901] validating driver "docker" against &{Name:functional-027308 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:functional-027308 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0317 13:25:31.364444 1160866 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0317 13:25:31.367902 1160866 out.go:201] 
	W0317 13:25:31.370864 1160866 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0317 13:25:31.373698 1160866 out.go:201] 

** /stderr **
functional_test.go:1008: (dbg) Run:  out/minikube-linux-arm64 start -p functional-027308 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
--- PASS: TestFunctional/parallel/DryRun (0.49s)
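The first dry-run fails with exit status 23 because the requested 250MB is below minikube's usable minimum of 1800MB (`RSRC_INSUFFICIENT_REQ_MEMORY`). A sketch of that floor check; the 1800MB minimum and message come from the log above, while mapping the failure to return code 23 mirrors the observed exit status and is otherwise an assumption:

```shell
# Reject memory requests below the minimum, as the dry-run above does.
validate_requested_memory() {
  req_mb=$1
  min_mb=1800
  if [ "$req_mb" -lt "$min_mb" ]; then
    echo "X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation ${req_mb}MiB is less than the usable minimum of ${min_mb}MB" >&2
    return 23
  fi
}

validate_requested_memory 250 2>/dev/null || echo "exit status $?"  # exit status 23
validate_requested_memory 4000 && echo "250MB fails, 4000MB passes"
```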

TestFunctional/parallel/InternationalLanguage (0.23s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1037: (dbg) Run:  out/minikube-linux-arm64 start -p functional-027308 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-027308 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (226.499698ms)

-- stdout --
	* [functional-027308] minikube v1.35.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=20539
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20539-1115410/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20539-1115410/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0317 13:25:31.012535 1160768 out.go:345] Setting OutFile to fd 1 ...
	I0317 13:25:31.012992 1160768 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0317 13:25:31.013031 1160768 out.go:358] Setting ErrFile to fd 2...
	I0317 13:25:31.013054 1160768 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0317 13:25:31.013471 1160768 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20539-1115410/.minikube/bin
	I0317 13:25:31.013908 1160768 out.go:352] Setting JSON to false
	I0317 13:25:31.014991 1160768 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":32882,"bootTime":1742185049,"procs":224,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1077-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0317 13:25:31.015104 1160768 start.go:139] virtualization:  
	I0317 13:25:31.021618 1160768 out.go:177] * [functional-027308] minikube v1.35.0 sur Ubuntu 20.04 (arm64)
	I0317 13:25:31.024900 1160768 notify.go:220] Checking for updates...
	I0317 13:25:31.027930 1160768 out.go:177]   - MINIKUBE_LOCATION=20539
	I0317 13:25:31.030894 1160768 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0317 13:25:31.033963 1160768 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20539-1115410/kubeconfig
	I0317 13:25:31.036809 1160768 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20539-1115410/.minikube
	I0317 13:25:31.039828 1160768 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0317 13:25:31.042729 1160768 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0317 13:25:31.046123 1160768 config.go:182] Loaded profile config "functional-027308": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0317 13:25:31.046686 1160768 driver.go:394] Setting default libvirt URI to qemu:///system
	I0317 13:25:31.080336 1160768 docker.go:123] docker version: linux-28.0.1:Docker Engine - Community
	I0317 13:25:31.080463 1160768 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0317 13:25:31.144103 1160768 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:53 SystemTime:2025-03-17 13:25:31.133794124 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1077-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.21.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.33.1]] Warnings:<nil>}}
	I0317 13:25:31.144239 1160768 docker.go:318] overlay module found
	I0317 13:25:31.147584 1160768 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0317 13:25:31.150498 1160768 start.go:297] selected driver: docker
	I0317 13:25:31.150522 1160768 start.go:901] validating driver "docker" against &{Name:functional-027308 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:functional-027308 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0317 13:25:31.150630 1160768 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0317 13:25:31.154302 1160768 out.go:201] 
	W0317 13:25:31.157164 1160768 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0317 13:25:31.161653 1160768 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.23s)
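This run repeats the dry-run failure under a French locale, so the `RSRC_INSUFFICIENT_REQ_MEMORY` message and driver banner come out localized. A hedged sketch of locale-driven message selection; which environment variable minikube actually consults, and how it does the lookup, are assumptions here, while the two strings are taken verbatim from the two dry-run logs:

```shell
# Pick a message by locale, as the English/French pair of logs suggests.
driver_message() {
  case "${LC_ALL:-${LANG:-en}}" in
    fr*) echo "* Utilisation du pilote docker basé sur le profil existant" ;;
    *)   echo "* Using the docker driver based on existing profile" ;;
  esac
}

( LC_ALL=fr_FR.UTF-8; driver_message )   # French banner
( LC_ALL=en_US.UTF-8; driver_message )   # English banner
```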

TestFunctional/parallel/StatusCmd (1.33s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:871: (dbg) Run:  out/minikube-linux-arm64 -p functional-027308 status
functional_test.go:877: (dbg) Run:  out/minikube-linux-arm64 -p functional-027308 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:889: (dbg) Run:  out/minikube-linux-arm64 -p functional-027308 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.33s)

TestFunctional/parallel/ServiceCmdConnect (10.72s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1644: (dbg) Run:  kubectl --context functional-027308 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1652: (dbg) Run:  kubectl --context functional-027308 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1657: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-8449669db6-hj2vt" [e0f85e90-5c06-4863-9715-02240df92dc3] Pending
E0317 13:25:10.673225 1120731 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/addons-464596/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "hello-node-connect-8449669db6-hj2vt" [e0f85e90-5c06-4863-9715-02240df92dc3] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-8449669db6-hj2vt" [e0f85e90-5c06-4863-9715-02240df92dc3] Running
functional_test.go:1657: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 10.003449094s
functional_test.go:1666: (dbg) Run:  out/minikube-linux-arm64 -p functional-027308 service hello-node-connect --url
functional_test.go:1672: found endpoint for hello-node-connect: http://192.168.49.2:32597
functional_test.go:1692: http://192.168.49.2:32597: success! body:

Hostname: hello-node-connect-8449669db6-hj2vt

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:32597
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (10.72s)
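The test above waits up to 10m0s for a pod matching `app=hello-node-connect` to become Ready, then fetches the NodePort URL. The shape of that wait loop can be sketched as follows, with `probe` stubbing the readiness check; the real test watches pod status through the Kubernetes API and then fetches http://192.168.49.2:32597, and the three-call stub mirrors the Pending → Pending/NotReady → Running transitions in the log:

```shell
# probe() succeeds on its third call, emulating the pod becoming Ready.
attempts=0
probe() {
  attempts=$((attempts + 1))
  [ "$attempts" -ge 3 ]
}

deadline=10   # stand-in for the 10m0s timeout
until probe; do
  deadline=$((deadline - 1))
  [ "$deadline" -gt 0 ] || { echo "timed out waiting for app=hello-node-connect"; exit 1; }
  # the real loop sleeps between status polls
done
echo "healthy after $attempts checks"   # healthy after 3 checks
```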

TestFunctional/parallel/AddonsCmd (0.2s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1707: (dbg) Run:  out/minikube-linux-arm64 -p functional-027308 addons list
functional_test.go:1719: (dbg) Run:  out/minikube-linux-arm64 -p functional-027308 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.20s)

TestFunctional/parallel/PersistentVolumeClaim (28.18s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [8de37f21-7954-4be0-9228-7e67a7a770af] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003572835s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-027308 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-027308 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-027308 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-027308 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [7f89d5d1-e6cb-4ecd-a8da-5d0cf9d86f61] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [7f89d5d1-e6cb-4ecd-a8da-5d0cf9d86f61] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.003301499s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-027308 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-027308 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-027308 delete -f testdata/storage-provisioner/pod.yaml: (1.181541932s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-027308 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [69022fb0-15b7-4ce1-ba28-8a4e55ddd9f8] Pending
helpers_test.go:344: "sp-pod" [69022fb0-15b7-4ce1-ba28-8a4e55ddd9f8] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [69022fb0-15b7-4ce1-ba28-8a4e55ddd9f8] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.003683176s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-027308 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (28.18s)
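The pod delete/recreate above is the point of the test: the file the first `sp-pod` touches on the PVC-backed mount must still be visible to the second `sp-pod`. Sketched with a host directory standing in for the provisioned PersistentVolume and subshells standing in for the two pods (both stand-ins are illustrative assumptions):

```shell
pv=$(mktemp -d)             # stand-in for the provisioned PersistentVolume
( cd "$pv" && touch foo )   # first sp-pod: touch /tmp/mount/foo
# first pod deleted here; the volume outlives it
( cd "$pv" && ls )          # second sp-pod: ls /tmp/mount -> prints foo
rm -rf "$pv"
```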

TestFunctional/parallel/SSHCmd (0.75s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1742: (dbg) Run:  out/minikube-linux-arm64 -p functional-027308 ssh "echo hello"
functional_test.go:1759: (dbg) Run:  out/minikube-linux-arm64 -p functional-027308 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.75s)

TestFunctional/parallel/CpCmd (2.27s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-027308 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-027308 ssh -n functional-027308 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-027308 cp functional-027308:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1892853073/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-027308 ssh -n functional-027308 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-027308 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-027308 ssh -n functional-027308 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.27s)
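Each `cp` above is paired with an `ssh -n ... sudo cat` to confirm the file landed intact on the node. The round-trip invariant can be sketched with plain `cp` standing in for `minikube cp` (which streams the file into the node container; the stand-in is an illustrative assumption):

```shell
src=$(mktemp) node=$(mktemp -d) back=$(mktemp)
echo "cp-test contents" > "$src"
cp "$src" "$node/cp-test.txt"   # host -> node (cp testdata/cp-test.txt ...)
cp "$node/cp-test.txt" "$back"  # node -> host (cp <profile>:/home/docker/cp-test.txt ...)
cmp -s "$src" "$back" && echo "round-trip intact"
rm -rf "$src" "$node" "$back"
```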

TestFunctional/parallel/FileSync (0.35s)
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1946: Checking for existence of /etc/test/nested/copy/1120731/hosts within VM
functional_test.go:1948: (dbg) Run:  out/minikube-linux-arm64 -p functional-027308 ssh "sudo cat /etc/test/nested/copy/1120731/hosts"
functional_test.go:1953: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.35s)

TestFunctional/parallel/CertSync (2.25s)
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1989: Checking for existence of /etc/ssl/certs/1120731.pem within VM
functional_test.go:1990: (dbg) Run:  out/minikube-linux-arm64 -p functional-027308 ssh "sudo cat /etc/ssl/certs/1120731.pem"
functional_test.go:1989: Checking for existence of /usr/share/ca-certificates/1120731.pem within VM
functional_test.go:1990: (dbg) Run:  out/minikube-linux-arm64 -p functional-027308 ssh "sudo cat /usr/share/ca-certificates/1120731.pem"
functional_test.go:1989: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1990: (dbg) Run:  out/minikube-linux-arm64 -p functional-027308 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2016: Checking for existence of /etc/ssl/certs/11207312.pem within VM
functional_test.go:2017: (dbg) Run:  out/minikube-linux-arm64 -p functional-027308 ssh "sudo cat /etc/ssl/certs/11207312.pem"
functional_test.go:2016: Checking for existence of /usr/share/ca-certificates/11207312.pem within VM
functional_test.go:2017: (dbg) Run:  out/minikube-linux-arm64 -p functional-027308 ssh "sudo cat /usr/share/ca-certificates/11207312.pem"
functional_test.go:2016: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2017: (dbg) Run:  out/minikube-linux-arm64 -p functional-027308 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.25s)

TestFunctional/parallel/NodeLabels (0.12s)
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:236: (dbg) Run:  kubectl --context functional-027308 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.12s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.38s)
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2044: (dbg) Run:  out/minikube-linux-arm64 -p functional-027308 ssh "sudo systemctl is-active crio"
functional_test.go:2044: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-027308 ssh "sudo systemctl is-active crio": exit status 1 (378.042793ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.38s)

TestFunctional/parallel/License (0.28s)
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2305: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.28s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.61s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-027308 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-027308 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-027308 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 1158194: os: process already finished
helpers_test.go:502: unable to terminate pid 1158008: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-027308 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.61s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-027308 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.46s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-027308 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [e71dd300-c2ee-417b-a8a5-f1aa509acd62] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [e71dd300-c2ee-417b-a8a5-f1aa509acd62] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 10.003285797s
I0317 13:25:08.676708 1120731 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.46s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.11s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-027308 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.11s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.104.241.96 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-027308 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

TestFunctional/parallel/ServiceCmd/DeployApp (7.21s)
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1454: (dbg) Run:  kubectl --context functional-027308 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1462: (dbg) Run:  kubectl --context functional-027308 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1467: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-64fc58db8c-4rrn8" [693d448b-b0fc-4f2f-889d-084b74f02cce] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-64fc58db8c-4rrn8" [693d448b-b0fc-4f2f-889d-084b74f02cce] Running
functional_test.go:1467: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 7.003009494s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (7.21s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.46s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1287: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1292: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.46s)

TestFunctional/parallel/ServiceCmd/List (0.58s)
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1476: (dbg) Run:  out/minikube-linux-arm64 -p functional-027308 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.58s)

TestFunctional/parallel/ProfileCmd/profile_list (0.49s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1327: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1332: Took "408.538975ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1341: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1346: Took "81.678047ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.49s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.58s)
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1506: (dbg) Run:  out/minikube-linux-arm64 -p functional-027308 service list -o json
functional_test.go:1511: Took "578.090597ms" to run "out/minikube-linux-arm64 -p functional-027308 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.58s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.48s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1378: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1383: Took "406.390984ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1391: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1396: Took "70.907901ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.48s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.55s)
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1526: (dbg) Run:  out/minikube-linux-arm64 -p functional-027308 service --namespace=default --https --url hello-node
functional_test.go:1539: found endpoint: https://192.168.49.2:32183
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.55s)

TestFunctional/parallel/MountCmd/any-port (8.52s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-027308 /tmp/TestFunctionalparallelMountCmdany-port2420008616/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1742217928230894141" to /tmp/TestFunctionalparallelMountCmdany-port2420008616/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1742217928230894141" to /tmp/TestFunctionalparallelMountCmdany-port2420008616/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1742217928230894141" to /tmp/TestFunctionalparallelMountCmdany-port2420008616/001/test-1742217928230894141
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-027308 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-027308 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (475.526013ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0317 13:25:28.707837 1120731 retry.go:31] will retry after 379.670402ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-027308 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-027308 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Mar 17 13:25 created-by-test
-rw-r--r-- 1 docker docker 24 Mar 17 13:25 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Mar 17 13:25 test-1742217928230894141
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-027308 ssh cat /mount-9p/test-1742217928230894141
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-027308 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [ca4d8652-3faf-47d3-b0e8-b6924b4f3bbf] Pending
helpers_test.go:344: "busybox-mount" [ca4d8652-3faf-47d3-b0e8-b6924b4f3bbf] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [ca4d8652-3faf-47d3-b0e8-b6924b4f3bbf] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [ca4d8652-3faf-47d3-b0e8-b6924b4f3bbf] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.003408763s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-027308 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-027308 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-027308 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-027308 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-027308 /tmp/TestFunctionalparallelMountCmdany-port2420008616/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.52s)

TestFunctional/parallel/ServiceCmd/Format (0.39s)
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1557: (dbg) Run:  out/minikube-linux-arm64 -p functional-027308 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.39s)

TestFunctional/parallel/ServiceCmd/URL (0.48s)
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1576: (dbg) Run:  out/minikube-linux-arm64 -p functional-027308 service hello-node --url
functional_test.go:1582: found endpoint for hello-node: http://192.168.49.2:32183
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.48s)

TestFunctional/parallel/MountCmd/specific-port (2.04s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-027308 /tmp/TestFunctionalparallelMountCmdspecific-port12126251/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-027308 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-027308 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (516.897007ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0317 13:25:37.264688 1120731 retry.go:31] will retry after 289.321334ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-027308 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-027308 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-027308 /tmp/TestFunctionalparallelMountCmdspecific-port12126251/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-027308 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-027308 ssh "sudo umount -f /mount-9p": exit status 1 (364.525468ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-027308 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-027308 /tmp/TestFunctionalparallelMountCmdspecific-port12126251/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.04s)

TestFunctional/parallel/MountCmd/VerifyCleanup (2.56s)
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-027308 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4113130971/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-027308 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4113130971/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-027308 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4113130971/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-027308 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-027308 ssh "findmnt -T" /mount1: exit status 1 (1.058998232s)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0317 13:25:39.850062 1120731 retry.go:31] will retry after 436.725582ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-027308 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-027308 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-027308 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-027308 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-027308 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4113130971/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-027308 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4113130971/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-027308 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4113130971/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.56s)

TestFunctional/parallel/Version/short (0.08s)
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2273: (dbg) Run:  out/minikube-linux-arm64 -p functional-027308 version --short
--- PASS: TestFunctional/parallel/Version/short (0.08s)

TestFunctional/parallel/Version/components (1.25s)
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2287: (dbg) Run:  out/minikube-linux-arm64 -p functional-027308 version -o=json --components
functional_test.go:2287: (dbg) Done: out/minikube-linux-arm64 -p functional-027308 version -o=json --components: (1.249395245s)
--- PASS: TestFunctional/parallel/Version/components (1.25s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.27s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:278: (dbg) Run:  out/minikube-linux-arm64 -p functional-027308 image ls --format short --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-arm64 -p functional-027308 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.32.2
registry.k8s.io/kube-proxy:v1.32.2
registry.k8s.io/kube-controller-manager:v1.32.2
registry.k8s.io/kube-apiserver:v1.32.2
registry.k8s.io/etcd:3.5.16-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.3
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-027308
docker.io/kubernetesui/metrics-scraper:<none>
docker.io/kubernetesui/dashboard:<none>
docker.io/kicbase/echo-server:functional-027308
functional_test.go:286: (dbg) Stderr: out/minikube-linux-arm64 -p functional-027308 image ls --format short --alsologtostderr:
I0317 13:25:48.773118 1164033 out.go:345] Setting OutFile to fd 1 ...
I0317 13:25:48.773296 1164033 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0317 13:25:48.773307 1164033 out.go:358] Setting ErrFile to fd 2...
I0317 13:25:48.773313 1164033 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0317 13:25:48.773593 1164033 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20539-1115410/.minikube/bin
I0317 13:25:48.774381 1164033 config.go:182] Loaded profile config "functional-027308": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.2
I0317 13:25:48.774515 1164033 config.go:182] Loaded profile config "functional-027308": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.2
I0317 13:25:48.774959 1164033 cli_runner.go:164] Run: docker container inspect functional-027308 --format={{.State.Status}}
I0317 13:25:48.803218 1164033 ssh_runner.go:195] Run: systemctl --version
I0317 13:25:48.803283 1164033 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-027308
I0317 13:25:48.829642 1164033 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33751 SSHKeyPath:/home/jenkins/minikube-integration/20539-1115410/.minikube/machines/functional-027308/id_rsa Username:docker}
I0317 13:25:48.920779 1164033 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
W0317 13:25:48.955654 1164033 root.go:91] failed to log command end to audit: failed to find a log row with id equals to a7575036-ed64-4fd2-b1fc-19c278e73f51
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.27s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.24s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:278: (dbg) Run:  out/minikube-linux-arm64 -p functional-027308 image ls --format table --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-arm64 -p functional-027308 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/kube-scheduler              | v1.32.2           | 82dfa03f692fb | 67.9MB |
| docker.io/library/nginx                     | alpine            | cedb667e1a7b4 | 49.4MB |
| docker.io/library/nginx                     | latest            | 678546cdd20cd | 197MB  |
| registry.k8s.io/etcd                        | 3.5.16-0          | 7fc9d4aa817aa | 142MB  |
| registry.k8s.io/pause                       | 3.1               | 8057e0500773a | 525kB  |
| registry.k8s.io/pause                       | latest            | 8cb2091f603e7 | 240kB  |
| docker.io/library/minikube-local-cache-test | functional-027308 | f7d24a1831da1 | 30B    |
| registry.k8s.io/pause                       | 3.3               | 3d18732f8686c | 484kB  |
| registry.k8s.io/kube-apiserver              | v1.32.2           | 6417e1437b6d9 | 93.9MB |
| registry.k8s.io/coredns/coredns             | v1.11.3           | 2f6c962e7b831 | 60.2MB |
| registry.k8s.io/pause                       | 3.10              | afb61768ce381 | 514kB  |
| docker.io/kubernetesui/dashboard            | <none>            | 20b332c9a70d8 | 244MB  |
| docker.io/kicbase/echo-server               | functional-027308 | ce2d2cda2d858 | 4.78MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 1611cd07b61d5 | 3.55MB |
| registry.k8s.io/echoserver-arm              | 1.8               | 72565bf5bbedf | 85MB   |
| registry.k8s.io/kube-controller-manager     | v1.32.2           | 3c9285acfd2ff | 87.2MB |
| registry.k8s.io/kube-proxy                  | v1.32.2           | e5aac5df76d9b | 97.1MB |
| docker.io/kubernetesui/metrics-scraper      | <none>            | a422e0e982356 | 42.3MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | ba04bb24b9575 | 29MB   |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:286: (dbg) Stderr: out/minikube-linux-arm64 -p functional-027308 image ls --format table --alsologtostderr:
I0317 13:25:49.527465 1164269 out.go:345] Setting OutFile to fd 1 ...
I0317 13:25:49.527571 1164269 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0317 13:25:49.527576 1164269 out.go:358] Setting ErrFile to fd 2...
I0317 13:25:49.527580 1164269 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0317 13:25:49.528226 1164269 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20539-1115410/.minikube/bin
I0317 13:25:49.531766 1164269 config.go:182] Loaded profile config "functional-027308": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.2
I0317 13:25:49.540034 1164269 config.go:182] Loaded profile config "functional-027308": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.2
I0317 13:25:49.540641 1164269 cli_runner.go:164] Run: docker container inspect functional-027308 --format={{.State.Status}}
I0317 13:25:49.563740 1164269 ssh_runner.go:195] Run: systemctl --version
I0317 13:25:49.563829 1164269 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-027308
I0317 13:25:49.582943 1164269 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33751 SSHKeyPath:/home/jenkins/minikube-integration/20539-1115410/.minikube/machines/functional-027308/id_rsa Username:docker}
I0317 13:25:49.672640 1164269 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.24s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.25s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:278: (dbg) Run:  out/minikube-linux-arm64 -p functional-027308 image ls --format json --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-arm64 -p functional-027308 image ls --format json --alsologtostderr:
[{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"244000000"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"525000"},{"id":"e5aac5df76d9b8dc899ab8c4db25a7648e7fb25cafe7a155066247883c78f062","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.32.2"],"size":"97100000"},{"id":"afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10"],"size":"514000"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"484000"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"f7d24a1831da1512d79ebce3de0f4842aaae83252a3bcb1415970d100b9f0188","repoDigests
":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-027308"],"size":"30"},{"id":"678546cdd20cd5baaea6f534dbb7482fc9f2f8d24c1f3c53c0e747b699b849da","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"197000000"},{"id":"2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"60200000"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"42300000"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29000000"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":[],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"85000000"},{"id":"6417e1437b6d9a789e1ca789695a574e1df00a632bdbfbcae9695c9a7d500e32","repoDigests":[],"repoTags":["
registry.k8s.io/kube-apiserver:v1.32.2"],"size":"93900000"},{"id":"82dfa03f692fb5d84f66c17d6ee9126b081182152b25d28ea456d89b7d5d8911","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.32.2"],"size":"67900000"},{"id":"3c9285acfd2ff7915bb451cc40ac060366ac519f3fef00c455f5aca0e0346c4d","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.32.2"],"size":"87200000"},{"id":"ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-027308"],"size":"4780000"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3550000"},{"id":"cedb667e1a7b4e6d843a4f74f1f2db0dac1c29b43978aa72dbae2193e3b8eea3","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"49400000"},{"id":"7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.16-0"],"siz
e":"142000000"}]
functional_test.go:286: (dbg) Stderr: out/minikube-linux-arm64 -p functional-027308 image ls --format json --alsologtostderr:
I0317 13:25:49.289477 1164196 out.go:345] Setting OutFile to fd 1 ...
I0317 13:25:49.289662 1164196 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0317 13:25:49.289674 1164196 out.go:358] Setting ErrFile to fd 2...
I0317 13:25:49.289688 1164196 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0317 13:25:49.289957 1164196 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20539-1115410/.minikube/bin
I0317 13:25:49.290600 1164196 config.go:182] Loaded profile config "functional-027308": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.2
I0317 13:25:49.290754 1164196 config.go:182] Loaded profile config "functional-027308": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.2
I0317 13:25:49.291285 1164196 cli_runner.go:164] Run: docker container inspect functional-027308 --format={{.State.Status}}
I0317 13:25:49.309418 1164196 ssh_runner.go:195] Run: systemctl --version
I0317 13:25:49.309477 1164196 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-027308
I0317 13:25:49.336294 1164196 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33751 SSHKeyPath:/home/jenkins/minikube-integration/20539-1115410/.minikube/machines/functional-027308/id_rsa Username:docker}
I0317 13:25:49.431660 1164196 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.25s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.25s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:278: (dbg) Run:  out/minikube-linux-arm64 -p functional-027308 image ls --format yaml --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-arm64 -p functional-027308 image ls --format yaml --alsologtostderr:
- id: e5aac5df76d9b8dc899ab8c4db25a7648e7fb25cafe7a155066247883c78f062
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.32.2
size: "97100000"
- id: 678546cdd20cd5baaea6f534dbb7482fc9f2f8d24c1f3c53c0e747b699b849da
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "197000000"
- id: 2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "60200000"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "244000000"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29000000"
- id: 6417e1437b6d9a789e1ca789695a574e1df00a632bdbfbcae9695c9a7d500e32
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.32.2
size: "93900000"
- id: 82dfa03f692fb5d84f66c17d6ee9126b081182152b25d28ea456d89b7d5d8911
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.32.2
size: "67900000"
- id: 7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.16-0
size: "142000000"
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-027308
size: "4780000"
- id: f7d24a1831da1512d79ebce3de0f4842aaae83252a3bcb1415970d100b9f0188
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-027308
size: "30"
- id: cedb667e1a7b4e6d843a4f74f1f2db0dac1c29b43978aa72dbae2193e3b8eea3
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "49400000"
- id: afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10
size: "514000"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "42300000"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3550000"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "525000"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 3c9285acfd2ff7915bb451cc40ac060366ac519f3fef00c455f5aca0e0346c4d
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.32.2
size: "87200000"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "484000"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests: []
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "85000000"

functional_test.go:286: (dbg) Stderr: out/minikube-linux-arm64 -p functional-027308 image ls --format yaml --alsologtostderr:
I0317 13:25:49.038068 1164131 out.go:345] Setting OutFile to fd 1 ...
I0317 13:25:49.039282 1164131 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0317 13:25:49.039332 1164131 out.go:358] Setting ErrFile to fd 2...
I0317 13:25:49.039697 1164131 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0317 13:25:49.040823 1164131 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20539-1115410/.minikube/bin
I0317 13:25:49.042894 1164131 config.go:182] Loaded profile config "functional-027308": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.2
I0317 13:25:49.043368 1164131 config.go:182] Loaded profile config "functional-027308": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.2
I0317 13:25:49.044687 1164131 cli_runner.go:164] Run: docker container inspect functional-027308 --format={{.State.Status}}
I0317 13:25:49.067980 1164131 ssh_runner.go:195] Run: systemctl --version
I0317 13:25:49.068050 1164131 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-027308
I0317 13:25:49.087045 1164131 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33751 SSHKeyPath:/home/jenkins/minikube-integration/20539-1115410/.minikube/machines/functional-027308/id_rsa Username:docker}
I0317 13:25:49.181502 1164131 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.25s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.46s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-027308 ssh pgrep buildkitd
functional_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-027308 ssh pgrep buildkitd: exit status 1 (326.547017ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:332: (dbg) Run:  out/minikube-linux-arm64 -p functional-027308 image build -t localhost/my-image:functional-027308 testdata/build --alsologtostderr
functional_test.go:332: (dbg) Done: out/minikube-linux-arm64 -p functional-027308 image build -t localhost/my-image:functional-027308 testdata/build --alsologtostderr: (2.920696149s)
functional_test.go:340: (dbg) Stderr: out/minikube-linux-arm64 -p functional-027308 image build -t localhost/my-image:functional-027308 testdata/build --alsologtostderr:
I0317 13:25:49.164587 1164166 out.go:345] Setting OutFile to fd 1 ...
I0317 13:25:49.165464 1164166 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0317 13:25:49.165481 1164166 out.go:358] Setting ErrFile to fd 2...
I0317 13:25:49.165488 1164166 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0317 13:25:49.165746 1164166 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20539-1115410/.minikube/bin
I0317 13:25:49.166435 1164166 config.go:182] Loaded profile config "functional-027308": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.2
I0317 13:25:49.168092 1164166 config.go:182] Loaded profile config "functional-027308": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.2
I0317 13:25:49.168658 1164166 cli_runner.go:164] Run: docker container inspect functional-027308 --format={{.State.Status}}
I0317 13:25:49.192126 1164166 ssh_runner.go:195] Run: systemctl --version
I0317 13:25:49.192182 1164166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-027308
I0317 13:25:49.226408 1164166 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33751 SSHKeyPath:/home/jenkins/minikube-integration/20539-1115410/.minikube/machines/functional-027308/id_rsa Username:docker}
I0317 13:25:49.328624 1164166 build_images.go:161] Building image from path: /tmp/build.1491706327.tar
I0317 13:25:49.328701 1164166 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0317 13:25:49.338995 1164166 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1491706327.tar
I0317 13:25:49.347255 1164166 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1491706327.tar: stat -c "%s %y" /var/lib/minikube/build/build.1491706327.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.1491706327.tar': No such file or directory
I0317 13:25:49.347290 1164166 ssh_runner.go:362] scp /tmp/build.1491706327.tar --> /var/lib/minikube/build/build.1491706327.tar (3072 bytes)
I0317 13:25:49.379717 1164166 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1491706327
I0317 13:25:49.388912 1164166 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1491706327 -xf /var/lib/minikube/build/build.1491706327.tar
I0317 13:25:49.400128 1164166 docker.go:360] Building image: /var/lib/minikube/build/build.1491706327
I0317 13:25:49.400221 1164166 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-027308 /var/lib/minikube/build/build.1491706327
#0 building with "default" instance using docker driver

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.3s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9 527B / 527B done
#5 sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02 1.47kB / 1.47kB done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.1s
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.3s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.0s done
#5 DONE 0.5s

#6 [2/3] RUN true
#6 DONE 0.3s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers 0.1s done
#8 writing image sha256:04349f3845f7e4df8d6f775574d9b2b18cffcd337b8b1611a0eefc35b707dfbc done
#8 naming to localhost/my-image:functional-027308 done
#8 DONE 0.1s
I0317 13:25:51.994655 1164166 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-027308 /var/lib/minikube/build/build.1491706327: (2.594404516s)
I0317 13:25:51.994729 1164166 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1491706327
I0317 13:25:52.005939 1164166 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1491706327.tar
I0317 13:25:52.017342 1164166 build_images.go:217] Built localhost/my-image:functional-027308 from /tmp/build.1491706327.tar
I0317 13:25:52.017378 1164166 build_images.go:133] succeeded building to: functional-027308
I0317 13:25:52.017384 1164166 build_images.go:134] failed building to: 
functional_test.go:468: (dbg) Run:  out/minikube-linux-arm64 -p functional-027308 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.46s)

TestFunctional/parallel/ImageCommands/Setup (0.74s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:359: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:364: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-027308
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.74s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.16s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:372: (dbg) Run:  out/minikube-linux-arm64 -p functional-027308 image load --daemon kicbase/echo-server:functional-027308 --alsologtostderr
2025/03/17 13:25:42 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:468: (dbg) Run:  out/minikube-linux-arm64 -p functional-027308 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.16s)

TestFunctional/parallel/DockerEnv/bash (1.25s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:516: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-arm64 -p functional-027308 docker-env) && out/minikube-linux-arm64 status -p functional-027308"
functional_test.go:539: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-arm64 -p functional-027308 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (1.25s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.98s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p functional-027308 image load --daemon kicbase/echo-server:functional-027308 --alsologtostderr
functional_test.go:468: (dbg) Run:  out/minikube-linux-arm64 -p functional-027308 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.98s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:252: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:257: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-027308
functional_test.go:262: (dbg) Run:  out/minikube-linux-arm64 -p functional-027308 image load --daemon kicbase/echo-server:functional-027308 --alsologtostderr
functional_test.go:468: (dbg) Run:  out/minikube-linux-arm64 -p functional-027308 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.28s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.21s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2136: (dbg) Run:  out/minikube-linux-arm64 -p functional-027308 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.21s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.22s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2136: (dbg) Run:  out/minikube-linux-arm64 -p functional-027308 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.22s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.23s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2136: (dbg) Run:  out/minikube-linux-arm64 -p functional-027308 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.23s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.47s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:397: (dbg) Run:  out/minikube-linux-arm64 -p functional-027308 image save kicbase/echo-server:functional-027308 /home/jenkins/workspace/Docker_Linux_docker_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.47s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.52s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:409: (dbg) Run:  out/minikube-linux-arm64 -p functional-027308 image rm kicbase/echo-server:functional-027308 --alsologtostderr
functional_test.go:468: (dbg) Run:  out/minikube-linux-arm64 -p functional-027308 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.52s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.81s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:426: (dbg) Run:  out/minikube-linux-arm64 -p functional-027308 image load /home/jenkins/workspace/Docker_Linux_docker_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:468: (dbg) Run:  out/minikube-linux-arm64 -p functional-027308 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.81s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.41s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:436: (dbg) Run:  docker rmi kicbase/echo-server:functional-027308
functional_test.go:441: (dbg) Run:  out/minikube-linux-arm64 -p functional-027308 image save --daemon kicbase/echo-server:functional-027308 --alsologtostderr
functional_test.go:449: (dbg) Run:  docker image inspect kicbase/echo-server:functional-027308
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.41s)

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:207: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:207: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-027308
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:215: (dbg) Run:  docker rmi -f localhost/my-image:functional-027308
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:223: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-027308
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (123.57s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 start -p ha-977034 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=docker
E0317 13:26:32.595093 1120731 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/addons-464596/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 start -p ha-977034 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=docker: (2m2.633621517s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-977034 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (123.57s)

TestMultiControlPlane/serial/DeployApp (42.77s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-977034 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-977034 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 kubectl -p ha-977034 -- rollout status deployment/busybox: (39.511971108s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-977034 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-977034 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-977034 -- exec busybox-58667487b6-225xl -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-977034 -- exec busybox-58667487b6-dbddw -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-977034 -- exec busybox-58667487b6-fktmm -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-977034 -- exec busybox-58667487b6-225xl -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-977034 -- exec busybox-58667487b6-dbddw -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-977034 -- exec busybox-58667487b6-fktmm -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-977034 -- exec busybox-58667487b6-225xl -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-977034 -- exec busybox-58667487b6-dbddw -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-977034 -- exec busybox-58667487b6-fktmm -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (42.77s)

TestMultiControlPlane/serial/PingHostFromPods (1.71s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-977034 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-977034 -- exec busybox-58667487b6-225xl -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-977034 -- exec busybox-58667487b6-225xl -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-977034 -- exec busybox-58667487b6-dbddw -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-977034 -- exec busybox-58667487b6-dbddw -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-977034 -- exec busybox-58667487b6-fktmm -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-977034 -- exec busybox-58667487b6-fktmm -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.71s)

TestMultiControlPlane/serial/AddWorkerNode (26.04s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-977034 -v=7 --alsologtostderr
E0317 13:28:48.732323 1120731 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/addons-464596/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 node add -p ha-977034 -v=7 --alsologtostderr: (24.997908998s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-977034 status -v=7 --alsologtostderr
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-977034 status -v=7 --alsologtostderr: (1.042332843s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (26.04s)

TestMultiControlPlane/serial/NodeLabels (0.12s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-977034 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.12s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (1.05s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.051699307s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.05s)

TestMultiControlPlane/serial/CopyFile (19.15s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-arm64 -p ha-977034 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-977034 cp testdata/cp-test.txt ha-977034:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-977034 ssh -n ha-977034 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-977034 cp ha-977034:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1383234769/001/cp-test_ha-977034.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-977034 ssh -n ha-977034 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-977034 cp ha-977034:/home/docker/cp-test.txt ha-977034-m02:/home/docker/cp-test_ha-977034_ha-977034-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-977034 ssh -n ha-977034 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-977034 ssh -n ha-977034-m02 "sudo cat /home/docker/cp-test_ha-977034_ha-977034-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-977034 cp ha-977034:/home/docker/cp-test.txt ha-977034-m03:/home/docker/cp-test_ha-977034_ha-977034-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-977034 ssh -n ha-977034 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-977034 ssh -n ha-977034-m03 "sudo cat /home/docker/cp-test_ha-977034_ha-977034-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-977034 cp ha-977034:/home/docker/cp-test.txt ha-977034-m04:/home/docker/cp-test_ha-977034_ha-977034-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-977034 ssh -n ha-977034 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-977034 ssh -n ha-977034-m04 "sudo cat /home/docker/cp-test_ha-977034_ha-977034-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-977034 cp testdata/cp-test.txt ha-977034-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-977034 ssh -n ha-977034-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-977034 cp ha-977034-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1383234769/001/cp-test_ha-977034-m02.txt
E0317 13:29:16.436512 1120731 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/addons-464596/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-977034 ssh -n ha-977034-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-977034 cp ha-977034-m02:/home/docker/cp-test.txt ha-977034:/home/docker/cp-test_ha-977034-m02_ha-977034.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-977034 ssh -n ha-977034-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-977034 ssh -n ha-977034 "sudo cat /home/docker/cp-test_ha-977034-m02_ha-977034.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-977034 cp ha-977034-m02:/home/docker/cp-test.txt ha-977034-m03:/home/docker/cp-test_ha-977034-m02_ha-977034-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-977034 ssh -n ha-977034-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-977034 ssh -n ha-977034-m03 "sudo cat /home/docker/cp-test_ha-977034-m02_ha-977034-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-977034 cp ha-977034-m02:/home/docker/cp-test.txt ha-977034-m04:/home/docker/cp-test_ha-977034-m02_ha-977034-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-977034 ssh -n ha-977034-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-977034 ssh -n ha-977034-m04 "sudo cat /home/docker/cp-test_ha-977034-m02_ha-977034-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-977034 cp testdata/cp-test.txt ha-977034-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-977034 ssh -n ha-977034-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-977034 cp ha-977034-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1383234769/001/cp-test_ha-977034-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-977034 ssh -n ha-977034-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-977034 cp ha-977034-m03:/home/docker/cp-test.txt ha-977034:/home/docker/cp-test_ha-977034-m03_ha-977034.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-977034 ssh -n ha-977034-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-977034 ssh -n ha-977034 "sudo cat /home/docker/cp-test_ha-977034-m03_ha-977034.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-977034 cp ha-977034-m03:/home/docker/cp-test.txt ha-977034-m02:/home/docker/cp-test_ha-977034-m03_ha-977034-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-977034 ssh -n ha-977034-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-977034 ssh -n ha-977034-m02 "sudo cat /home/docker/cp-test_ha-977034-m03_ha-977034-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-977034 cp ha-977034-m03:/home/docker/cp-test.txt ha-977034-m04:/home/docker/cp-test_ha-977034-m03_ha-977034-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-977034 ssh -n ha-977034-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-977034 ssh -n ha-977034-m04 "sudo cat /home/docker/cp-test_ha-977034-m03_ha-977034-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-977034 cp testdata/cp-test.txt ha-977034-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-977034 ssh -n ha-977034-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-977034 cp ha-977034-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1383234769/001/cp-test_ha-977034-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-977034 ssh -n ha-977034-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-977034 cp ha-977034-m04:/home/docker/cp-test.txt ha-977034:/home/docker/cp-test_ha-977034-m04_ha-977034.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-977034 ssh -n ha-977034-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-977034 ssh -n ha-977034 "sudo cat /home/docker/cp-test_ha-977034-m04_ha-977034.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-977034 cp ha-977034-m04:/home/docker/cp-test.txt ha-977034-m02:/home/docker/cp-test_ha-977034-m04_ha-977034-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-977034 ssh -n ha-977034-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-977034 ssh -n ha-977034-m02 "sudo cat /home/docker/cp-test_ha-977034-m04_ha-977034-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-977034 cp ha-977034-m04:/home/docker/cp-test.txt ha-977034-m03:/home/docker/cp-test_ha-977034-m04_ha-977034-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-977034 ssh -n ha-977034-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-977034 ssh -n ha-977034-m03 "sudo cat /home/docker/cp-test_ha-977034-m04_ha-977034-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (19.15s)

TestMultiControlPlane/serial/StopSecondaryNode (11.76s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p ha-977034 node stop m02 -v=7 --alsologtostderr
ha_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p ha-977034 node stop m02 -v=7 --alsologtostderr: (10.982442543s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-arm64 -p ha-977034 status -v=7 --alsologtostderr
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-977034 status -v=7 --alsologtostderr: exit status 7 (781.3545ms)

-- stdout --
	ha-977034
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-977034-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-977034-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-977034-m04
	type: Worker
	host: Running
	kubelet: Running
	
-- /stdout --
** stderr ** 
	I0317 13:29:40.352198 1187507 out.go:345] Setting OutFile to fd 1 ...
	I0317 13:29:40.352333 1187507 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0317 13:29:40.352343 1187507 out.go:358] Setting ErrFile to fd 2...
	I0317 13:29:40.352362 1187507 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0317 13:29:40.352656 1187507 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20539-1115410/.minikube/bin
	I0317 13:29:40.352872 1187507 out.go:352] Setting JSON to false
	I0317 13:29:40.352919 1187507 mustload.go:65] Loading cluster: ha-977034
	I0317 13:29:40.352961 1187507 notify.go:220] Checking for updates...
	I0317 13:29:40.353346 1187507 config.go:182] Loaded profile config "ha-977034": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0317 13:29:40.353371 1187507 status.go:174] checking status of ha-977034 ...
	I0317 13:29:40.353925 1187507 cli_runner.go:164] Run: docker container inspect ha-977034 --format={{.State.Status}}
	I0317 13:29:40.372548 1187507 status.go:371] ha-977034 host status = "Running" (err=<nil>)
	I0317 13:29:40.372574 1187507 host.go:66] Checking if "ha-977034" exists ...
	I0317 13:29:40.373026 1187507 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-977034
	I0317 13:29:40.404296 1187507 host.go:66] Checking if "ha-977034" exists ...
	I0317 13:29:40.404636 1187507 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0317 13:29:40.404748 1187507 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-977034
	I0317 13:29:40.424234 1187507 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33756 SSHKeyPath:/home/jenkins/minikube-integration/20539-1115410/.minikube/machines/ha-977034/id_rsa Username:docker}
	I0317 13:29:40.513505 1187507 ssh_runner.go:195] Run: systemctl --version
	I0317 13:29:40.517860 1187507 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0317 13:29:40.530346 1187507 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0317 13:29:40.608884 1187507 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:true NGoroutines:72 SystemTime:2025-03-17 13:29:40.598483687 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1077-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.21.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.33.1]] Warnings:<nil>}}
	I0317 13:29:40.609422 1187507 kubeconfig.go:125] found "ha-977034" server: "https://192.168.49.254:8443"
	I0317 13:29:40.609464 1187507 api_server.go:166] Checking apiserver status ...
	I0317 13:29:40.609508 1187507 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:29:40.623562 1187507 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2451/cgroup
	I0317 13:29:40.634531 1187507 api_server.go:182] apiserver freezer: "3:freezer:/docker/a949b446f3c98b63fbf3844000fd05c8cc28137a41dc3159e0bb0aeb79ae6bde/kubepods/burstable/pod00d04b1d89ad86f233050227d80034a1/6af5bd9c8bb5582e3ad70fad879204175497eecf9880f8d52f934e516a71414c"
	I0317 13:29:40.634610 1187507 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/a949b446f3c98b63fbf3844000fd05c8cc28137a41dc3159e0bb0aeb79ae6bde/kubepods/burstable/pod00d04b1d89ad86f233050227d80034a1/6af5bd9c8bb5582e3ad70fad879204175497eecf9880f8d52f934e516a71414c/freezer.state
	I0317 13:29:40.654020 1187507 api_server.go:204] freezer state: "THAWED"
	I0317 13:29:40.654059 1187507 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0317 13:29:40.661942 1187507 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0317 13:29:40.661970 1187507 status.go:463] ha-977034 apiserver status = Running (err=<nil>)
	I0317 13:29:40.661987 1187507 status.go:176] ha-977034 status: &{Name:ha-977034 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0317 13:29:40.662012 1187507 status.go:174] checking status of ha-977034-m02 ...
	I0317 13:29:40.662351 1187507 cli_runner.go:164] Run: docker container inspect ha-977034-m02 --format={{.State.Status}}
	I0317 13:29:40.689186 1187507 status.go:371] ha-977034-m02 host status = "Stopped" (err=<nil>)
	I0317 13:29:40.689206 1187507 status.go:384] host is not running, skipping remaining checks
	I0317 13:29:40.689216 1187507 status.go:176] ha-977034-m02 status: &{Name:ha-977034-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0317 13:29:40.689236 1187507 status.go:174] checking status of ha-977034-m03 ...
	I0317 13:29:40.689543 1187507 cli_runner.go:164] Run: docker container inspect ha-977034-m03 --format={{.State.Status}}
	I0317 13:29:40.710959 1187507 status.go:371] ha-977034-m03 host status = "Running" (err=<nil>)
	I0317 13:29:40.710993 1187507 host.go:66] Checking if "ha-977034-m03" exists ...
	I0317 13:29:40.711281 1187507 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-977034-m03
	I0317 13:29:40.730662 1187507 host.go:66] Checking if "ha-977034-m03" exists ...
	I0317 13:29:40.730962 1187507 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0317 13:29:40.731003 1187507 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-977034-m03
	I0317 13:29:40.750779 1187507 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33766 SSHKeyPath:/home/jenkins/minikube-integration/20539-1115410/.minikube/machines/ha-977034-m03/id_rsa Username:docker}
	I0317 13:29:40.846803 1187507 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0317 13:29:40.865269 1187507 kubeconfig.go:125] found "ha-977034" server: "https://192.168.49.254:8443"
	I0317 13:29:40.865296 1187507 api_server.go:166] Checking apiserver status ...
	I0317 13:29:40.865338 1187507 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:29:40.878259 1187507 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2323/cgroup
	I0317 13:29:40.889487 1187507 api_server.go:182] apiserver freezer: "3:freezer:/docker/e2a217eb63a90c3186c459193222ee957d01d03719e8d02c983e2d578d31278b/kubepods/burstable/podefa546b7beccb1d8527936c580f2b324/e17f32aa56c3ba5b92ec3a30e56b764d2b4d86015a4a146611c91deb998c1541"
	I0317 13:29:40.889563 1187507 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/e2a217eb63a90c3186c459193222ee957d01d03719e8d02c983e2d578d31278b/kubepods/burstable/podefa546b7beccb1d8527936c580f2b324/e17f32aa56c3ba5b92ec3a30e56b764d2b4d86015a4a146611c91deb998c1541/freezer.state
	I0317 13:29:40.898991 1187507 api_server.go:204] freezer state: "THAWED"
	I0317 13:29:40.899027 1187507 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0317 13:29:40.907130 1187507 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0317 13:29:40.907224 1187507 status.go:463] ha-977034-m03 apiserver status = Running (err=<nil>)
	I0317 13:29:40.907251 1187507 status.go:176] ha-977034-m03 status: &{Name:ha-977034-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0317 13:29:40.907297 1187507 status.go:174] checking status of ha-977034-m04 ...
	I0317 13:29:40.907667 1187507 cli_runner.go:164] Run: docker container inspect ha-977034-m04 --format={{.State.Status}}
	I0317 13:29:40.926194 1187507 status.go:371] ha-977034-m04 host status = "Running" (err=<nil>)
	I0317 13:29:40.926226 1187507 host.go:66] Checking if "ha-977034-m04" exists ...
	I0317 13:29:40.926600 1187507 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-977034-m04
	I0317 13:29:40.946409 1187507 host.go:66] Checking if "ha-977034-m04" exists ...
	I0317 13:29:40.946925 1187507 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0317 13:29:40.946999 1187507 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-977034-m04
	I0317 13:29:40.964500 1187507 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33771 SSHKeyPath:/home/jenkins/minikube-integration/20539-1115410/.minikube/machines/ha-977034-m04/id_rsa Username:docker}
	I0317 13:29:41.056789 1187507 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0317 13:29:41.068097 1187507 status.go:176] ha-977034-m04 status: &{Name:ha-977034-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (11.76s)

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.77s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.77s)

TestMultiControlPlane/serial/RestartSecondaryNode (40.41s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p ha-977034 node start m02 -v=7 --alsologtostderr
E0317 13:29:58.226272 1120731 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/functional-027308/client.crt: no such file or directory" logger="UnhandledError"
E0317 13:29:58.232697 1120731 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/functional-027308/client.crt: no such file or directory" logger="UnhandledError"
E0317 13:29:58.244158 1120731 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/functional-027308/client.crt: no such file or directory" logger="UnhandledError"
E0317 13:29:58.265520 1120731 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/functional-027308/client.crt: no such file or directory" logger="UnhandledError"
E0317 13:29:58.306818 1120731 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/functional-027308/client.crt: no such file or directory" logger="UnhandledError"
E0317 13:29:58.388246 1120731 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/functional-027308/client.crt: no such file or directory" logger="UnhandledError"
E0317 13:29:58.549698 1120731 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/functional-027308/client.crt: no such file or directory" logger="UnhandledError"
E0317 13:29:58.871439 1120731 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/functional-027308/client.crt: no such file or directory" logger="UnhandledError"
E0317 13:29:59.513612 1120731 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/functional-027308/client.crt: no such file or directory" logger="UnhandledError"
E0317 13:30:00.795563 1120731 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/functional-027308/client.crt: no such file or directory" logger="UnhandledError"
E0317 13:30:03.357208 1120731 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/functional-027308/client.crt: no such file or directory" logger="UnhandledError"
E0317 13:30:08.478762 1120731 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/functional-027308/client.crt: no such file or directory" logger="UnhandledError"
E0317 13:30:18.720507 1120731 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/functional-027308/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p ha-977034 node start m02 -v=7 --alsologtostderr: (39.164584883s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-977034 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Done: out/minikube-linux-arm64 -p ha-977034 status -v=7 --alsologtostderr: (1.125322694s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (40.41s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.19s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.189293452s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.19s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (183.17s)
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-977034 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Run:  out/minikube-linux-arm64 stop -p ha-977034 -v=7 --alsologtostderr
E0317 13:30:39.202875 1120731 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/functional-027308/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:464: (dbg) Done: out/minikube-linux-arm64 stop -p ha-977034 -v=7 --alsologtostderr: (34.56708109s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-arm64 start -p ha-977034 --wait=true -v=7 --alsologtostderr
E0317 13:31:20.164312 1120731 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/functional-027308/client.crt: no such file or directory" logger="UnhandledError"
E0317 13:32:42.085679 1120731 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/functional-027308/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:469: (dbg) Done: out/minikube-linux-arm64 start -p ha-977034 --wait=true -v=7 --alsologtostderr: (2m28.412111365s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-977034
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (183.17s)

TestMultiControlPlane/serial/DeleteSecondaryNode (11.2s)
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-arm64 -p ha-977034 node delete m03 -v=7 --alsologtostderr
ha_test.go:489: (dbg) Done: out/minikube-linux-arm64 -p ha-977034 node delete m03 -v=7 --alsologtostderr: (10.240254956s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-arm64 -p ha-977034 status -v=7 --alsologtostderr
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (11.20s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.78s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.78s)

TestMultiControlPlane/serial/StopCluster (32.9s)
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-arm64 -p ha-977034 stop -v=7 --alsologtostderr
E0317 13:33:48.732752 1120731 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/addons-464596/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:533: (dbg) Done: out/minikube-linux-arm64 -p ha-977034 stop -v=7 --alsologtostderr: (32.7866242s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-arm64 -p ha-977034 status -v=7 --alsologtostderr
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-977034 status -v=7 --alsologtostderr: exit status 7 (110.647823ms)

-- stdout --
	ha-977034
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-977034-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-977034-m04
	type: Worker
	host: Stopped
	kubelet: Stopped

-- /stdout --
** stderr ** 
	I0317 13:34:11.421397 1214316 out.go:345] Setting OutFile to fd 1 ...
	I0317 13:34:11.421508 1214316 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0317 13:34:11.421519 1214316 out.go:358] Setting ErrFile to fd 2...
	I0317 13:34:11.421525 1214316 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0317 13:34:11.421759 1214316 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20539-1115410/.minikube/bin
	I0317 13:34:11.421943 1214316 out.go:352] Setting JSON to false
	I0317 13:34:11.421986 1214316 mustload.go:65] Loading cluster: ha-977034
	I0317 13:34:11.422060 1214316 notify.go:220] Checking for updates...
	I0317 13:34:11.423243 1214316 config.go:182] Loaded profile config "ha-977034": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0317 13:34:11.423273 1214316 status.go:174] checking status of ha-977034 ...
	I0317 13:34:11.423921 1214316 cli_runner.go:164] Run: docker container inspect ha-977034 --format={{.State.Status}}
	I0317 13:34:11.440878 1214316 status.go:371] ha-977034 host status = "Stopped" (err=<nil>)
	I0317 13:34:11.440901 1214316 status.go:384] host is not running, skipping remaining checks
	I0317 13:34:11.440907 1214316 status.go:176] ha-977034 status: &{Name:ha-977034 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0317 13:34:11.440929 1214316 status.go:174] checking status of ha-977034-m02 ...
	I0317 13:34:11.441233 1214316 cli_runner.go:164] Run: docker container inspect ha-977034-m02 --format={{.State.Status}}
	I0317 13:34:11.458555 1214316 status.go:371] ha-977034-m02 host status = "Stopped" (err=<nil>)
	I0317 13:34:11.458581 1214316 status.go:384] host is not running, skipping remaining checks
	I0317 13:34:11.458588 1214316 status.go:176] ha-977034-m02 status: &{Name:ha-977034-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0317 13:34:11.458607 1214316 status.go:174] checking status of ha-977034-m04 ...
	I0317 13:34:11.458916 1214316 cli_runner.go:164] Run: docker container inspect ha-977034-m04 --format={{.State.Status}}
	I0317 13:34:11.485188 1214316 status.go:371] ha-977034-m04 host status = "Stopped" (err=<nil>)
	I0317 13:34:11.485214 1214316 status.go:384] host is not running, skipping remaining checks
	I0317 13:34:11.485222 1214316 status.go:176] ha-977034-m04 status: &{Name:ha-977034-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (32.90s)

TestMultiControlPlane/serial/RestartCluster (88.13s)
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-arm64 start -p ha-977034 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=docker
E0317 13:34:58.219005 1120731 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/functional-027308/client.crt: no such file or directory" logger="UnhandledError"
E0317 13:35:25.927648 1120731 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/functional-027308/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:562: (dbg) Done: out/minikube-linux-arm64 start -p ha-977034 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=docker: (1m27.23448792s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-arm64 -p ha-977034 status -v=7 --alsologtostderr
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (88.13s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.77s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.77s)

TestMultiControlPlane/serial/AddSecondaryNode (46.41s)
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-977034 --control-plane -v=7 --alsologtostderr
ha_test.go:607: (dbg) Done: out/minikube-linux-arm64 node add -p ha-977034 --control-plane -v=7 --alsologtostderr: (45.387493574s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-arm64 -p ha-977034 status -v=7 --alsologtostderr
ha_test.go:613: (dbg) Done: out/minikube-linux-arm64 -p ha-977034 status -v=7 --alsologtostderr: (1.017340865s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (46.41s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.05s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.050192237s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.05s)

TestImageBuild/serial/Setup (33.07s)
=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-linux-arm64 start -p image-079062 --driver=docker  --container-runtime=docker
image_test.go:69: (dbg) Done: out/minikube-linux-arm64 start -p image-079062 --driver=docker  --container-runtime=docker: (33.074491225s)
--- PASS: TestImageBuild/serial/Setup (33.07s)

TestImageBuild/serial/NormalBuild (1.77s)
=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-079062
image_test.go:78: (dbg) Done: out/minikube-linux-arm64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-079062: (1.77238179s)
--- PASS: TestImageBuild/serial/NormalBuild (1.77s)

TestImageBuild/serial/BuildWithBuildArg (1.03s)
=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-079062
image_test.go:99: (dbg) Done: out/minikube-linux-arm64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-079062: (1.025929451s)
--- PASS: TestImageBuild/serial/BuildWithBuildArg (1.03s)

TestImageBuild/serial/BuildWithDockerIgnore (0.9s)
=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-079062
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.90s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.77s)
=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-079062
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.77s)

TestJSONOutput/start/Command (76.34s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-639250 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=docker
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-639250 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=docker: (1m16.331313294s)
--- PASS: TestJSONOutput/start/Command (76.34s)

TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.57s)
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-639250 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.57s)

TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.51s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-639250 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.51s)

TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (10.96s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-639250 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-639250 --output=json --user=testUser: (10.963033945s)
--- PASS: TestJSONOutput/stop/Command (10.96s)

TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.24s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-787946 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-787946 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (89.021395ms)

-- stdout --
	{"specversion":"1.0","id":"9d9c7e8d-f6dd-486b-b95c-3bc023e8596f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-787946] minikube v1.35.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"ab54d813-520f-4e03-8985-c84b82d4e454","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=20539"}}
	{"specversion":"1.0","id":"4f3d5f9d-fbbf-4f18-8fb3-ca7761aefafe","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"6b4cf624-e086-4124-8974-bcf49efcea9b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/20539-1115410/kubeconfig"}}
	{"specversion":"1.0","id":"4df4a6f0-6f91-42ad-a155-d2adc2fb7536","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/20539-1115410/.minikube"}}
	{"specversion":"1.0","id":"a49ebfea-907d-43f4-9a8c-4e6de1a0531f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"a0c9a9a6-eab0-4bf7-84fc-a02ef1a012cd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"24632f2a-f616-4dc9-83d2-5ede80c82518","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-787946" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-787946
--- PASS: TestErrorJSONOutput (0.24s)

TestKicCustomNetwork/create_custom_network (33.79s)
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-678448 --network=
E0317 13:38:48.733167 1120731 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/addons-464596/client.crt: no such file or directory" logger="UnhandledError"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-678448 --network=: (31.605260108s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-678448" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-678448
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-678448: (2.160240825s)
--- PASS: TestKicCustomNetwork/create_custom_network (33.79s)

TestKicCustomNetwork/use_default_bridge_network (30.23s)
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-208502 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-208502 --network=bridge: (28.129166159s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-208502" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-208502
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-208502: (2.075746622s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (30.23s)

TestKicExistingNetwork (32.75s)
=== RUN   TestKicExistingNetwork
I0317 13:39:50.684131 1120731 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0317 13:39:50.699939 1120731 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0317 13:39:50.700591 1120731 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I0317 13:39:50.701329 1120731 cli_runner.go:164] Run: docker network inspect existing-network
W0317 13:39:50.717786 1120731 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I0317 13:39:50.717818 1120731 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

stderr:
Error response from daemon: network existing-network not found
I0317 13:39:50.717837 1120731 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

** /stderr **
I0317 13:39:50.718035 1120731 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0317 13:39:50.734857 1120731 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-509972d2f15a IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:ee:dc:aa:ea:d5} reservation:<nil>}
I0317 13:39:50.739088 1120731 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
I0317 13:39:50.739432 1120731 network.go:206] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40016f38b0}
I0317 13:39:50.740082 1120731 network_create.go:124] attempt to create docker network existing-network 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
I0317 13:39:50.740161 1120731 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I0317 13:39:50.809396 1120731 network_create.go:108] docker network existing-network 192.168.67.0/24 created
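The lines above show minikube's subnet scan: 192.168.49.0/24 is skipped as taken (an existing docker bridge occupies it), 192.168.58.0/24 is skipped as reserved, and 192.168.67.0/24 is chosen. A minimal sketch of that skip-taken/skip-reserved loop, assuming a fixed candidate list and plain maps standing in for the docker-network and reservation checks in the real network.go:

```go
package main

import "fmt"

// pickFreeSubnet walks candidate /24 subnets in the order the log shows
// minikube probing them (49 -> 58 -> 67) and returns the first one that is
// neither taken by an existing network nor reserved. The taken/reserved
// maps are hypothetical stand-ins for the real checks.
func pickFreeSubnet(candidates []string, taken, reserved map[string]bool) (string, bool) {
	for _, s := range candidates {
		if taken[s] {
			fmt.Printf("skipping subnet %s that is taken\n", s)
			continue
		}
		if reserved[s] {
			fmt.Printf("skipping subnet %s that is reserved\n", s)
			continue
		}
		return s, true
	}
	return "", false
}

func main() {
	candidates := []string{"192.168.49.0/24", "192.168.58.0/24", "192.168.67.0/24"}
	taken := map[string]bool{"192.168.49.0/24": true}
	reserved := map[string]bool{"192.168.58.0/24": true}
	if s, ok := pickFreeSubnet(candidates, taken, reserved); ok {
		fmt.Println("using free private subnet", s)
	}
}
```

With the inputs above this reproduces the log's outcome: the first two candidates are skipped and 192.168.67.0/24 is selected.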
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-425541 --network=existing-network
E0317 13:39:58.223986 1120731 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/functional-027308/client.crt: no such file or directory" logger="UnhandledError"
E0317 13:40:11.797828 1120731 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/addons-464596/client.crt: no such file or directory" logger="UnhandledError"
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-425541 --network=existing-network: (30.575859964s)
helpers_test.go:175: Cleaning up "existing-network-425541" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-425541
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-425541: (2.008945386s)
I0317 13:40:23.411228 1120731 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (32.75s)

TestKicCustomSubnet (33.54s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-863058 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-863058 --subnet=192.168.60.0/24: (31.3566474s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-863058 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-863058" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-863058
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-863058: (2.163128551s)
--- PASS: TestKicCustomSubnet (33.54s)

TestKicStaticIP (34.09s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-632861 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-632861 --static-ip=192.168.200.200: (31.773066242s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-632861 ip
helpers_test.go:175: Cleaning up "static-ip-632861" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-632861
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-632861: (2.140119987s)
--- PASS: TestKicStaticIP (34.09s)

TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (71.63s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-970052 --driver=docker  --container-runtime=docker
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-970052 --driver=docker  --container-runtime=docker: (33.57760899s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-972489 --driver=docker  --container-runtime=docker
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-972489 --driver=docker  --container-runtime=docker: (32.226171645s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-970052
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-972489
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-972489" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-972489
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-972489: (2.22230832s)
helpers_test.go:175: Cleaning up "first-970052" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-970052
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-970052: (2.224262528s)
--- PASS: TestMinikubeProfile (71.63s)

TestMountStart/serial/StartWithMountFirst (7.89s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-388529 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-388529 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (6.888772868s)
--- PASS: TestMountStart/serial/StartWithMountFirst (7.89s)

TestMountStart/serial/VerifyMountFirst (0.26s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-388529 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.26s)

TestMountStart/serial/StartWithMountSecond (10.96s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-390392 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-390392 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (9.962533805s)
--- PASS: TestMountStart/serial/StartWithMountSecond (10.96s)

TestMountStart/serial/VerifyMountSecond (0.27s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-390392 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.27s)

TestMountStart/serial/DeleteFirst (1.48s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-388529 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-388529 --alsologtostderr -v=5: (1.480638627s)
--- PASS: TestMountStart/serial/DeleteFirst (1.48s)

TestMountStart/serial/VerifyMountPostDelete (0.26s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-390392 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.26s)

TestMountStart/serial/Stop (1.2s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-390392
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-390392: (1.203902053s)
--- PASS: TestMountStart/serial/Stop (1.20s)

TestMountStart/serial/RestartStopped (8.64s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-390392
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-390392: (7.636892544s)
--- PASS: TestMountStart/serial/RestartStopped (8.64s)

TestMountStart/serial/VerifyMountPostStop (0.26s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-390392 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.26s)

TestMultiNode/serial/FreshStart2Nodes (74.62s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-309686 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=docker
E0317 13:43:48.732895 1120731 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/addons-464596/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-309686 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=docker: (1m13.983813464s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-309686 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (74.62s)

TestMultiNode/serial/DeployApp2Nodes (48.48s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-309686 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-309686 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-309686 -- rollout status deployment/busybox: (3.795319315s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-309686 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --"
I0317 13:44:34.407325 1120731 retry.go:31] will retry after 829.193911ms: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-309686 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --"
I0317 13:44:35.386383 1120731 retry.go:31] will retry after 1.473643249s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-309686 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --"
I0317 13:44:37.027163 1120731 retry.go:31] will retry after 2.292005312s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-309686 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --"
I0317 13:44:39.466499 1120731 retry.go:31] will retry after 2.150356424s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-309686 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --"
I0317 13:44:41.766503 1120731 retry.go:31] will retry after 7.375593846s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-309686 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --"
I0317 13:44:49.290906 1120731 retry.go:31] will retry after 6.88380048s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-309686 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --"
I0317 13:44:56.336617 1120731 retry.go:31] will retry after 6.97077247s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --"
E0317 13:44:58.218549 1120731 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/functional-027308/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-309686 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --"
I0317 13:45:03.465384 1120731 retry.go:31] will retry after 13.556871105s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-309686 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-309686 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-309686 -- exec busybox-58667487b6-4mr76 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-309686 -- exec busybox-58667487b6-kwxxd -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-309686 -- exec busybox-58667487b6-4mr76 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-309686 -- exec busybox-58667487b6-kwxxd -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-309686 -- exec busybox-58667487b6-4mr76 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-309686 -- exec busybox-58667487b6-kwxxd -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (48.48s)

TestMultiNode/serial/PingHostFrom2Pods (1.05s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-309686 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-309686 -- exec busybox-58667487b6-4mr76 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-309686 -- exec busybox-58667487b6-4mr76 -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-309686 -- exec busybox-58667487b6-kwxxd -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-309686 -- exec busybox-58667487b6-kwxxd -- sh -c "ping -c 1 192.168.58.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.05s)
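The `nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3` pipeline above extracts the host gateway IP from busybox's nslookup output: the 3rd space-separated field of the 5th line. A sketch of the same extraction in Go, using a hypothetical busybox-style transcript (not captured from this run):

```go
package main

import (
	"fmt"
	"strings"
)

// hostIPFromNslookup mirrors awk 'NR==5' | cut -d' ' -f3: take the 5th
// line of the output and return its 3rd single-space-separated field.
func hostIPFromNslookup(out string) string {
	lines := strings.Split(out, "\n")
	if len(lines) < 5 {
		return ""
	}
	fields := strings.Split(lines[4], " ") // NR==5 is the 5th line
	if len(fields) < 3 {
		return ""
	}
	return fields[2] // cut -f3 is the 3rd field
}

func main() {
	// Hypothetical busybox nslookup transcript for illustration only.
	sample := "Server:    10.96.0.10\n" +
		"Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local\n" +
		"\n" +
		"Name:      host.minikube.internal\n" +
		"Address 1: 192.168.58.1 host.minikube.internal\n"
	fmt.Println(hostIPFromNslookup(sample)) // prints 192.168.58.1
}
```

Note the dependence on exact line and field positions: like the shell pipeline, this breaks if the resolver prints a different number of header lines, which is why the test then confirms reachability with `ping -c 1` against the extracted address.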

TestMultiNode/serial/AddNode (18.65s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-309686 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-309686 -v 3 --alsologtostderr: (17.859265687s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-309686 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (18.65s)

TestMultiNode/serial/MultiNodeLabels (0.1s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-309686 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.10s)

TestMultiNode/serial/ProfileList (0.69s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.69s)

TestMultiNode/serial/CopyFile (10.12s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-309686 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-309686 cp testdata/cp-test.txt multinode-309686:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-309686 ssh -n multinode-309686 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-309686 cp multinode-309686:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2088212483/001/cp-test_multinode-309686.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-309686 ssh -n multinode-309686 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-309686 cp multinode-309686:/home/docker/cp-test.txt multinode-309686-m02:/home/docker/cp-test_multinode-309686_multinode-309686-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-309686 ssh -n multinode-309686 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-309686 ssh -n multinode-309686-m02 "sudo cat /home/docker/cp-test_multinode-309686_multinode-309686-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-309686 cp multinode-309686:/home/docker/cp-test.txt multinode-309686-m03:/home/docker/cp-test_multinode-309686_multinode-309686-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-309686 ssh -n multinode-309686 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-309686 ssh -n multinode-309686-m03 "sudo cat /home/docker/cp-test_multinode-309686_multinode-309686-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-309686 cp testdata/cp-test.txt multinode-309686-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-309686 ssh -n multinode-309686-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-309686 cp multinode-309686-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2088212483/001/cp-test_multinode-309686-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-309686 ssh -n multinode-309686-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-309686 cp multinode-309686-m02:/home/docker/cp-test.txt multinode-309686:/home/docker/cp-test_multinode-309686-m02_multinode-309686.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-309686 ssh -n multinode-309686-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-309686 ssh -n multinode-309686 "sudo cat /home/docker/cp-test_multinode-309686-m02_multinode-309686.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-309686 cp multinode-309686-m02:/home/docker/cp-test.txt multinode-309686-m03:/home/docker/cp-test_multinode-309686-m02_multinode-309686-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-309686 ssh -n multinode-309686-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-309686 ssh -n multinode-309686-m03 "sudo cat /home/docker/cp-test_multinode-309686-m02_multinode-309686-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-309686 cp testdata/cp-test.txt multinode-309686-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-309686 ssh -n multinode-309686-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-309686 cp multinode-309686-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2088212483/001/cp-test_multinode-309686-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-309686 ssh -n multinode-309686-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-309686 cp multinode-309686-m03:/home/docker/cp-test.txt multinode-309686:/home/docker/cp-test_multinode-309686-m03_multinode-309686.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-309686 ssh -n multinode-309686-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-309686 ssh -n multinode-309686 "sudo cat /home/docker/cp-test_multinode-309686-m03_multinode-309686.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-309686 cp multinode-309686-m03:/home/docker/cp-test.txt multinode-309686-m02:/home/docker/cp-test_multinode-309686-m03_multinode-309686-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-309686 ssh -n multinode-309686-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-309686 ssh -n multinode-309686-m02 "sudo cat /home/docker/cp-test_multinode-309686-m03_multinode-309686-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.12s)

TestMultiNode/serial/StopNode (2.28s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-309686 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-309686 node stop m03: (1.227258814s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-309686 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-309686 status: exit status 7 (537.346315ms)

-- stdout --
	multinode-309686
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-309686-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-309686-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-309686 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-309686 status --alsologtostderr: exit status 7 (510.316892ms)

-- stdout --
	multinode-309686
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-309686-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-309686-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0317 13:45:51.117683 1290480 out.go:345] Setting OutFile to fd 1 ...
	I0317 13:45:51.117851 1290480 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0317 13:45:51.117882 1290480 out.go:358] Setting ErrFile to fd 2...
	I0317 13:45:51.117903 1290480 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0317 13:45:51.118179 1290480 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20539-1115410/.minikube/bin
	I0317 13:45:51.118411 1290480 out.go:352] Setting JSON to false
	I0317 13:45:51.118475 1290480 mustload.go:65] Loading cluster: multinode-309686
	I0317 13:45:51.118549 1290480 notify.go:220] Checking for updates...
	I0317 13:45:51.118924 1290480 config.go:182] Loaded profile config "multinode-309686": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0317 13:45:51.118971 1290480 status.go:174] checking status of multinode-309686 ...
	I0317 13:45:51.119579 1290480 cli_runner.go:164] Run: docker container inspect multinode-309686 --format={{.State.Status}}
	I0317 13:45:51.140619 1290480 status.go:371] multinode-309686 host status = "Running" (err=<nil>)
	I0317 13:45:51.140647 1290480 host.go:66] Checking if "multinode-309686" exists ...
	I0317 13:45:51.140967 1290480 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-309686
	I0317 13:45:51.164925 1290480 host.go:66] Checking if "multinode-309686" exists ...
	I0317 13:45:51.165273 1290480 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0317 13:45:51.165326 1290480 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-309686
	I0317 13:45:51.188658 1290480 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33881 SSHKeyPath:/home/jenkins/minikube-integration/20539-1115410/.minikube/machines/multinode-309686/id_rsa Username:docker}
	I0317 13:45:51.276927 1290480 ssh_runner.go:195] Run: systemctl --version
	I0317 13:45:51.281103 1290480 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0317 13:45:51.292838 1290480 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0317 13:45:51.349536 1290480 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-03-17 13:45:51.34073709 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1077-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.21.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.33.1]] Warnings:<nil>}}
	I0317 13:45:51.350072 1290480 kubeconfig.go:125] found "multinode-309686" server: "https://192.168.58.2:8443"
	I0317 13:45:51.350108 1290480 api_server.go:166] Checking apiserver status ...
	I0317 13:45:51.350159 1290480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:45:51.366507 1290480 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2372/cgroup
	I0317 13:45:51.375651 1290480 api_server.go:182] apiserver freezer: "3:freezer:/docker/b453d45d72bbc6361ab9c9a1d2c339e9b54e537e5a1196def7e51391d5f0a9fc/kubepods/burstable/podb49cad8e216134fcaca67897586661e1/9c603c3bd2d60e4279ba18a20041f48f43d37a2cc0a0a6e9ab4875a9a48655ad"
	I0317 13:45:51.375729 1290480 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/b453d45d72bbc6361ab9c9a1d2c339e9b54e537e5a1196def7e51391d5f0a9fc/kubepods/burstable/podb49cad8e216134fcaca67897586661e1/9c603c3bd2d60e4279ba18a20041f48f43d37a2cc0a0a6e9ab4875a9a48655ad/freezer.state
	I0317 13:45:51.384916 1290480 api_server.go:204] freezer state: "THAWED"
	I0317 13:45:51.384953 1290480 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0317 13:45:51.393109 1290480 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I0317 13:45:51.393136 1290480 status.go:463] multinode-309686 apiserver status = Running (err=<nil>)
	I0317 13:45:51.393153 1290480 status.go:176] multinode-309686 status: &{Name:multinode-309686 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0317 13:45:51.393175 1290480 status.go:174] checking status of multinode-309686-m02 ...
	I0317 13:45:51.393476 1290480 cli_runner.go:164] Run: docker container inspect multinode-309686-m02 --format={{.State.Status}}
	I0317 13:45:51.411911 1290480 status.go:371] multinode-309686-m02 host status = "Running" (err=<nil>)
	I0317 13:45:51.411936 1290480 host.go:66] Checking if "multinode-309686-m02" exists ...
	I0317 13:45:51.412264 1290480 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-309686-m02
	I0317 13:45:51.428794 1290480 host.go:66] Checking if "multinode-309686-m02" exists ...
	I0317 13:45:51.429179 1290480 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0317 13:45:51.429223 1290480 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-309686-m02
	I0317 13:45:51.449929 1290480 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33886 SSHKeyPath:/home/jenkins/minikube-integration/20539-1115410/.minikube/machines/multinode-309686-m02/id_rsa Username:docker}
	I0317 13:45:51.541095 1290480 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0317 13:45:51.553063 1290480 status.go:176] multinode-309686-m02 status: &{Name:multinode-309686-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0317 13:45:51.553100 1290480 status.go:174] checking status of multinode-309686-m03 ...
	I0317 13:45:51.553403 1290480 cli_runner.go:164] Run: docker container inspect multinode-309686-m03 --format={{.State.Status}}
	I0317 13:45:51.572042 1290480 status.go:371] multinode-309686-m03 host status = "Stopped" (err=<nil>)
	I0317 13:45:51.572069 1290480 status.go:384] host is not running, skipping remaining checks
	I0317 13:45:51.572077 1290480 status.go:176] multinode-309686-m03 status: &{Name:multinode-309686-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.28s)
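The status check traced in the log above finds the kube-apiserver PID with `pgrep`, resolves its freezer cgroup from `/proc/<pid>/cgroup`, reads `freezer.state`, and only then probes `/healthz`. A minimal Python sketch of the cgroup-parsing step (minikube's real implementation is Go, in `api_server.go`; these helper names are hypothetical):

```python
def freezer_state_path(cgroup_line, mount="/sys/fs/cgroup/freezer"):
    # A /proc/<pid>/cgroup entry looks like "3:freezer:/docker/<id>/kubepods/...";
    # the third colon-separated field is the path under the freezer mount.
    _, _, path = cgroup_line.strip().split(":", 2)
    return f"{mount}{path}/freezer.state"


def apiserver_runnable(state):
    # freezer.state (cgroup v1) reports THAWED, FREEZING, or FROZEN;
    # only THAWED means the apiserver process is not suspended.
    return state.strip() == "THAWED"
```

In the run above the state read back was `"THAWED"`, so the health probe proceeded and returned 200.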

                                                
                                    
TestMultiNode/serial/StartAfterStop (10.7s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-309686 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-309686 node start m03 -v=7 --alsologtostderr: (9.905762417s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-309686 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (10.70s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (82.61s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-309686
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-309686
E0317 13:46:21.291499 1120731 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/functional-027308/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-309686: (22.598752264s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-309686 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-309686 --wait=true -v=8 --alsologtostderr: (59.866433313s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-309686
--- PASS: TestMultiNode/serial/RestartKeepsNodes (82.61s)

                                                
                                    
TestMultiNode/serial/DeleteNode (5.35s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-309686 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-309686 node delete m03: (4.676660916s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-309686 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.35s)
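The final assertion above pipes `kubectl get nodes` through a go-template that prints the status of each node's `Ready` condition. The same extraction, sketched in Python against `kubectl get nodes -o json` output (the sample payload is illustrative, not captured from this run):

```python
import json


def ready_statuses(nodes_json):
    # For each node, emit the "status" of its condition with type "Ready",
    # mirroring the go-template used by the test.
    nodes = json.loads(nodes_json)
    return [cond["status"]
            for item in nodes["items"]
            for cond in item["status"]["conditions"]
            if cond["type"] == "Ready"]


# Illustrative two-node payload (after m03 is deleted, two nodes remain).
sample = json.dumps({"items": [
    {"status": {"conditions": [{"type": "Ready", "status": "True"}]}},
    {"status": {"conditions": [{"type": "Ready", "status": "True"}]}},
]})
```

Here `ready_statuses(sample)` yields one entry per remaining node, the analogue of the template's line-per-node output.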

                                                
                                    
TestMultiNode/serial/StopMultiNode (21.52s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-309686 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-309686 stop: (21.316823119s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-309686 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-309686 status: exit status 7 (102.556376ms)

                                                
                                                
-- stdout --
	multinode-309686
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-309686-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-309686 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-309686 status --alsologtostderr: exit status 7 (103.313345ms)

                                                
                                                
-- stdout --
	multinode-309686
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-309686-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0317 13:47:51.712055 1303951 out.go:345] Setting OutFile to fd 1 ...
	I0317 13:47:51.712234 1303951 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0317 13:47:51.712244 1303951 out.go:358] Setting ErrFile to fd 2...
	I0317 13:47:51.712249 1303951 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0317 13:47:51.712490 1303951 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20539-1115410/.minikube/bin
	I0317 13:47:51.712671 1303951 out.go:352] Setting JSON to false
	I0317 13:47:51.712703 1303951 mustload.go:65] Loading cluster: multinode-309686
	I0317 13:47:51.712803 1303951 notify.go:220] Checking for updates...
	I0317 13:47:51.713092 1303951 config.go:182] Loaded profile config "multinode-309686": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0317 13:47:51.713113 1303951 status.go:174] checking status of multinode-309686 ...
	I0317 13:47:51.714708 1303951 cli_runner.go:164] Run: docker container inspect multinode-309686 --format={{.State.Status}}
	I0317 13:47:51.736888 1303951 status.go:371] multinode-309686 host status = "Stopped" (err=<nil>)
	I0317 13:47:51.736914 1303951 status.go:384] host is not running, skipping remaining checks
	I0317 13:47:51.736921 1303951 status.go:176] multinode-309686 status: &{Name:multinode-309686 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0317 13:47:51.736955 1303951 status.go:174] checking status of multinode-309686-m02 ...
	I0317 13:47:51.737252 1303951 cli_runner.go:164] Run: docker container inspect multinode-309686-m02 --format={{.State.Status}}
	I0317 13:47:51.761965 1303951 status.go:371] multinode-309686-m02 host status = "Stopped" (err=<nil>)
	I0317 13:47:51.761984 1303951 status.go:384] host is not running, skipping remaining checks
	I0317 13:47:51.761991 1303951 status.go:176] multinode-309686-m02 status: &{Name:multinode-309686-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (21.52s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (63.74s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-309686 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=docker
E0317 13:48:48.732408 1120731 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/addons-464596/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-309686 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=docker: (1m3.050968639s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-309686 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (63.74s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (36.42s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-309686
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-309686-m02 --driver=docker  --container-runtime=docker
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-309686-m02 --driver=docker  --container-runtime=docker: exit status 14 (95.6619ms)

                                                
                                                
-- stdout --
	* [multinode-309686-m02] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=20539
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20539-1115410/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20539-1115410/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-309686-m02' is duplicated with machine name 'multinode-309686-m02' in profile 'multinode-309686'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-309686-m03 --driver=docker  --container-runtime=docker
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-309686-m03 --driver=docker  --container-runtime=docker: (33.809576388s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-309686
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-309686: exit status 80 (323.922026ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-309686 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-309686-m03 already exists in multinode-309686-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-309686-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-309686-m03: (2.125455222s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (36.42s)
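The `MK_USAGE` failure above comes from minikube's uniqueness check: multi-node machines are named `<profile>-mNN`, so a new profile named `multinode-309686-m02` collides with the machine of the same name inside profile `multinode-309686`, while `multinode-309686-m03` succeeds because that machine was deleted earlier in the suite. A simplified, hypothetical version of the check:

```python
def profile_name_conflicts(candidate, profiles, machines):
    # A new profile may not reuse an existing profile name or an existing
    # machine name (multi-node machines are named "<profile>-mNN").
    return candidate in profiles or candidate in machines
```

With `profiles=["multinode-309686"]` and `machines=["multinode-309686", "multinode-309686-m02"]`, the m02 name conflicts and the m03 name does not, matching the two outcomes logged above.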

                                                
                                    
TestPreload (105.5s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-700680 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.24.4
E0317 13:49:58.218739 1120731 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/functional-027308/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-700680 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.24.4: (1m6.761667188s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-700680 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-700680 image pull gcr.io/k8s-minikube/busybox: (2.356424855s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-700680
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-700680: (10.914718889s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-700680 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-700680 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker: (22.87849507s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-700680 image list
helpers_test.go:175: Cleaning up "test-preload-700680" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-700680
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-700680: (2.276446495s)
--- PASS: TestPreload (105.50s)

                                                
                                    
TestSkaffold (118.45s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /tmp/skaffold.exe3579181299 version
skaffold_test.go:63: skaffold version: v2.14.1
skaffold_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p skaffold-463896 --memory=2600 --driver=docker  --container-runtime=docker
skaffold_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p skaffold-463896 --memory=2600 --driver=docker  --container-runtime=docker: (32.60066788s)
skaffold_test.go:86: copying out/minikube-linux-arm64 to /home/jenkins/workspace/Docker_Linux_docker_arm64/out/minikube
skaffold_test.go:105: (dbg) Run:  /tmp/skaffold.exe3579181299 run --minikube-profile skaffold-463896 --kube-context skaffold-463896 --status-check=true --port-forward=false --interactive=false
skaffold_test.go:105: (dbg) Done: /tmp/skaffold.exe3579181299 run --minikube-profile skaffold-463896 --kube-context skaffold-463896 --status-check=true --port-forward=false --interactive=false: (1m10.106049677s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:344: "leeroy-app-8469f9d77f-dm6vr" [06dc619d-f88b-4d0c-8c7a-c3fa8a55e7be] Running
E0317 13:53:48.732606 1120731 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/addons-464596/client.crt: no such file or directory" logger="UnhandledError"
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 6.002882434s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:344: "leeroy-web-d8dfcc4f7-p8zk7" [25a9a3be-c830-463d-a6b6-524039b8e6ec] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.003504684s
helpers_test.go:175: Cleaning up "skaffold-463896" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p skaffold-463896
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p skaffold-463896: (3.024913878s)
--- PASS: TestSkaffold (118.45s)
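The "waiting 1m0s for pods matching ..." lines above come from a helper that polls the cluster until some pod with the given label reports `Running`, or the timeout elapses. The loop, sketched with an injected pod lister (function and field names are illustrative, not the test helper's actual API):

```python
import time


def wait_for_running(list_pods, timeout=60.0, poll=0.05):
    # list_pods() returns dicts for pods matching the label selector;
    # succeed as soon as any reports phase "Running", else time out.
    deadline = time.monotonic() + timeout
    while True:
        if any(p.get("phase") == "Running" for p in list_pods()):
            return True
        if time.monotonic() >= deadline:
            return False
        time.sleep(poll)
```

Polling with a deadline rather than a fixed sleep is why the log can report "healthy within 6.002882434s" against a 1m0s budget.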

                                                
                                    
TestInsufficientStorage (10.76s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-029187 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=docker
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-029187 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=docker: exit status 26 (8.434838325s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"24c4bbde-d15d-47e9-a9b5-3bc97ae5503d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-029187] minikube v1.35.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"6c5cba7d-4cc6-4f9a-af5b-b741e6ced0fb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=20539"}}
	{"specversion":"1.0","id":"f05a425c-e405-4d41-8329-d8d315c0ce85","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"ffdba4fb-42a0-4e23-be4a-c0b5998f05ef","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/20539-1115410/kubeconfig"}}
	{"specversion":"1.0","id":"ca8fc88e-8ec6-46a3-bf2b-b0f83e1a1c60","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/20539-1115410/.minikube"}}
	{"specversion":"1.0","id":"de72d5e7-5355-4fbe-9a1b-95a11a7855e7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"1fbcfb70-6df0-45e2-97e7-8e5ea60bbfdb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"62fe9d91-91b2-475f-bd6e-d946317b5045","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"37873a99-54f8-4565-a1d8-5b7eb26e204e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"ad14dd20-b345-48ad-97d2-80a1044b1872","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"aa27b1b7-8421-4b1a-b325-6bdcaf65aae5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"0dc81fe9-87c1-47d8-a539-8b5ab1a1be02","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-029187\" primary control-plane node in \"insufficient-storage-029187\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"bab42277-3fba-4a5e-a25b-6a85dbcc11b0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.46-1741860993-20523 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"13ac8788-ef90-4cb7-be6b-85bb61a9bc61","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"bf4f5e27-21f5-42f9-bfbe-49383f94e993","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-029187 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-029187 --output=json --layout=cluster: exit status 7 (295.085362ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-029187","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.35.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-029187","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0317 13:54:05.673497 1338105 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-029187" does not appear in /home/jenkins/minikube-integration/20539-1115410/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-029187 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-029187 --output=json --layout=cluster: exit status 7 (291.703445ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-029187","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.35.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-029187","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0317 13:54:05.966628 1338166 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-029187" does not appear in /home/jenkins/minikube-integration/20539-1115410/kubeconfig
	E0317 13:54:05.977335 1338166 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/insufficient-storage-029187/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-029187" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-029187
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-029187: (1.73345144s)
--- PASS: TestInsufficientStorage (10.76s)
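With `--output=json`, minikube emits one CloudEvents envelope per line, as in the stdout above; the exit status 26 pairs with a final `io.k8s.sigs.minikube.error` event. A small parser for that stream (the two sample events are condensed from the run above, with most fields dropped):

```python
import json

# Two events condensed from the --output=json stream above.
stream = """\
{"specversion":"1.0","type":"io.k8s.sigs.minikube.step","data":{"currentstep":"8","name":"Creating Container","totalsteps":"19"}}
{"specversion":"1.0","type":"io.k8s.sigs.minikube.error","data":{"exitcode":"26","name":"RSRC_DOCKER_STORAGE","issues":"https://github.com/kubernetes/minikube/issues/9024"}}
"""


def first_error(stream):
    # Each line is one CloudEvents envelope; error events end in ".error".
    for line in stream.strip().splitlines():
        event = json.loads(line)
        if event["type"].endswith(".error"):
            return event["data"]
    return None
```

The error event's `data` carries the exit code and failure name (`RSRC_DOCKER_STORAGE` here), which is what a caller would match on instead of scraping the human-readable message.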

                                                
                                    
TestRunningBinaryUpgrade (92.6s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.1580794996 start -p running-upgrade-136669 --memory=2200 --vm-driver=docker  --container-runtime=docker
E0317 13:59:58.218203 1120731 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/functional-027308/client.crt: no such file or directory" logger="UnhandledError"
E0317 14:00:04.841299 1120731 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/skaffold-463896/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.1580794996 start -p running-upgrade-136669 --memory=2200 --vm-driver=docker  --container-runtime=docker: (40.148906573s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-136669 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-136669 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (48.969562699s)
helpers_test.go:175: Cleaning up "running-upgrade-136669" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-136669
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-136669: (2.693577712s)
--- PASS: TestRunningBinaryUpgrade (92.60s)

                                                
                                    
TestKubernetesUpgrade (383.37s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-670566 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-670566 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (1m2.766988467s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-670566
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-670566: (1.490651707s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-670566 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-670566 status --format={{.Host}}: exit status 7 (147.764895ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-670566 --memory=2200 --kubernetes-version=v1.32.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E0317 13:56:51.800017 1120731 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/addons-464596/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-670566 --memory=2200 --kubernetes-version=v1.32.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (4m42.650035204s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-670566 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-670566 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=docker
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-670566 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=docker: exit status 106 (145.297183ms)

-- stdout --
	* [kubernetes-upgrade-670566] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=20539
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20539-1115410/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20539-1115410/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.32.2 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-670566
	    minikube start -p kubernetes-upgrade-670566 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-6705662 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.32.2, by running:
	    
	    minikube start -p kubernetes-upgrade-670566 --kubernetes-version=v1.32.2

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-670566 --memory=2200 --kubernetes-version=v1.32.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-670566 --memory=2200 --kubernetes-version=v1.32.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (33.112061729s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-670566" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-670566
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-670566: (2.939968902s)
--- PASS: TestKubernetesUpgrade (383.37s)

TestMissingContainerUpgrade (164.55s)
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.4226976898 start -p missing-upgrade-085900 --memory=2200 --driver=docker  --container-runtime=docker
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.4226976898 start -p missing-upgrade-085900 --memory=2200 --driver=docker  --container-runtime=docker: (1m36.526067576s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-085900
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-085900: (10.445344177s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-085900
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-085900 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-085900 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (54.704170146s)
helpers_test.go:175: Cleaning up "missing-upgrade-085900" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-085900
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-085900: (2.157379833s)
--- PASS: TestMissingContainerUpgrade (164.55s)

TestPause/serial/Start (51.65s)
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-023111 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=docker
E0317 13:54:58.218955 1120731 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/functional-027308/client.crt: no such file or directory" logger="UnhandledError"
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-023111 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=docker: (51.653044902s)
--- PASS: TestPause/serial/Start (51.65s)

TestPause/serial/SecondStartNoReconfiguration (36.13s)
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-023111 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-023111 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (36.118553584s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (36.13s)

TestPause/serial/Pause (0.67s)
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-023111 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.67s)

TestPause/serial/VerifyStatus (0.45s)
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-023111 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-023111 --output=json --layout=cluster: exit status 2 (446.444508ms)

-- stdout --
	{"Name":"pause-023111","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.35.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-023111","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.45s)

TestPause/serial/Unpause (0.65s)
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-023111 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.65s)

TestPause/serial/PauseAgain (0.93s)
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-023111 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.93s)

TestPause/serial/DeletePaused (2.36s)
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-023111 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-023111 --alsologtostderr -v=5: (2.35633384s)
--- PASS: TestPause/serial/DeletePaused (2.36s)

TestPause/serial/VerifyDeletedResources (0.17s)
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-023111
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-023111: exit status 1 (22.212261ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: get pause-023111: no such volume

** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.17s)

TestStoppedBinaryUpgrade/Setup (0.89s)
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.89s)

TestStoppedBinaryUpgrade/Upgrade (83.68s)
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.3668515494 start -p stopped-upgrade-223184 --memory=2200 --vm-driver=docker  --container-runtime=docker
E0317 13:58:42.902297 1120731 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/skaffold-463896/client.crt: no such file or directory" logger="UnhandledError"
E0317 13:58:42.909489 1120731 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/skaffold-463896/client.crt: no such file or directory" logger="UnhandledError"
E0317 13:58:42.920869 1120731 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/skaffold-463896/client.crt: no such file or directory" logger="UnhandledError"
E0317 13:58:42.942267 1120731 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/skaffold-463896/client.crt: no such file or directory" logger="UnhandledError"
E0317 13:58:42.983636 1120731 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/skaffold-463896/client.crt: no such file or directory" logger="UnhandledError"
E0317 13:58:43.065041 1120731 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/skaffold-463896/client.crt: no such file or directory" logger="UnhandledError"
E0317 13:58:43.226472 1120731 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/skaffold-463896/client.crt: no such file or directory" logger="UnhandledError"
E0317 13:58:43.548180 1120731 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/skaffold-463896/client.crt: no such file or directory" logger="UnhandledError"
E0317 13:58:44.190188 1120731 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/skaffold-463896/client.crt: no such file or directory" logger="UnhandledError"
E0317 13:58:45.472207 1120731 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/skaffold-463896/client.crt: no such file or directory" logger="UnhandledError"
E0317 13:58:48.033558 1120731 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/skaffold-463896/client.crt: no such file or directory" logger="UnhandledError"
E0317 13:58:48.732478 1120731 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/addons-464596/client.crt: no such file or directory" logger="UnhandledError"
E0317 13:58:53.155218 1120731 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/skaffold-463896/client.crt: no such file or directory" logger="UnhandledError"
E0317 13:59:03.397408 1120731 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/skaffold-463896/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.3668515494 start -p stopped-upgrade-223184 --memory=2200 --vm-driver=docker  --container-runtime=docker: (39.625519269s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.3668515494 -p stopped-upgrade-223184 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.3668515494 -p stopped-upgrade-223184 stop: (10.896678818s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-223184 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E0317 13:59:23.878722 1120731 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/skaffold-463896/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-223184 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (33.153094384s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (83.68s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.34s)
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-223184
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-223184: (1.335464472s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.34s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.14s)
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-767621 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-767621 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=docker: exit status 14 (136.916198ms)

-- stdout --
	* [NoKubernetes-767621] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=20539
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20539-1115410/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20539-1115410/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:

	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.14s)

TestNoKubernetes/serial/StartWithK8s (39.75s)
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-767621 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-767621 --driver=docker  --container-runtime=docker: (39.245165028s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-767621 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (39.75s)

TestNoKubernetes/serial/StartWithStopK8s (21.02s)
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-767621 --no-kubernetes --driver=docker  --container-runtime=docker
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-767621 --no-kubernetes --driver=docker  --container-runtime=docker: (18.347944983s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-767621 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-767621 status -o json: exit status 2 (372.343161ms)

-- stdout --
	{"Name":"NoKubernetes-767621","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-767621
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-767621: (2.296139311s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (21.02s)

TestNoKubernetes/serial/Start (10.27s)
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-767621 --no-kubernetes --driver=docker  --container-runtime=docker
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-767621 --no-kubernetes --driver=docker  --container-runtime=docker: (10.268868898s)
--- PASS: TestNoKubernetes/serial/Start (10.27s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.34s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-767621 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-767621 "sudo systemctl is-active --quiet service kubelet": exit status 1 (338.97168ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.34s)

TestNoKubernetes/serial/ProfileList (1.18s)
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.18s)

TestNoKubernetes/serial/Stop (1.26s)
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-767621
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-767621: (1.256994398s)
--- PASS: TestNoKubernetes/serial/Stop (1.26s)

TestNoKubernetes/serial/StartNoArgs (8.81s)
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-767621 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-767621 --driver=docker  --container-runtime=docker: (8.808960057s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (8.81s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.43s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-767621 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-767621 "sudo systemctl is-active --quiet service kubelet": exit status 1 (431.862465ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.43s)

TestStartStop/group/old-k8s-version/serial/FirstStart (153.01s)
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-927846 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-927846 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0: (2m33.005453622s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (153.01s)

TestStartStop/group/old-k8s-version/serial/DeployApp (10.55s)
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-927846 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [f82d1f65-3788-460f-aa08-5dc5fb31df1d] Pending
helpers_test.go:344: "busybox" [f82d1f65-3788-460f-aa08-5dc5fb31df1d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [f82d1f65-3788-460f-aa08-5dc5fb31df1d] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.0041718s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-927846 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.55s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.08s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-927846 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-927846 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.08s)

TestStartStop/group/old-k8s-version/serial/Stop (11.13s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-927846 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-927846 --alsologtostderr -v=3: (11.12953088s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (11.13s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-927846 -n old-k8s-version-927846
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-927846 -n old-k8s-version-927846: exit status 7 (81.378271ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-927846 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/old-k8s-version/serial/SecondStart (122.72s)
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-927846 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-927846 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0: (2m2.297903837s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-927846 -n old-k8s-version-927846
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (122.72s)

TestStartStop/group/no-preload/serial/FirstStart (60.81s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-610573 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.32.2
E0317 14:08:42.903053 1120731 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/skaffold-463896/client.crt: no such file or directory" logger="UnhandledError"
E0317 14:08:48.732315 1120731 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/addons-464596/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-610573 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.32.2: (1m0.814367787s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (60.81s)

TestStartStop/group/no-preload/serial/DeployApp (9.39s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-610573 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [718933d9-48bc-473e-9820-1336de473764] Pending
helpers_test.go:344: "busybox" [718933d9-48bc-473e-9820-1336de473764] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [718933d9-48bc-473e-9820-1336de473764] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.003814177s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-610573 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.39s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.16s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-610573 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-610573 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.030823894s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-610573 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.16s)

TestStartStop/group/no-preload/serial/Stop (10.83s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-610573 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-610573 --alsologtostderr -v=3: (10.831809259s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (10.83s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-610573 -n no-preload-610573
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-610573 -n no-preload-610573: exit status 7 (67.550082ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-610573 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/no-preload/serial/SecondStart (266.55s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-610573 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.32.2
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-610573 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.32.2: (4m26.168777596s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-610573 -n no-preload-610573
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (266.55s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-h55d4" [5abac917-06c5-4c23-a758-87d3b8065d42] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003935267s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.14s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-h55d4" [5abac917-06c5-4c23-a758-87d3b8065d42] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004405976s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-927846 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.14s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.31s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-927846 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.31s)

TestStartStop/group/old-k8s-version/serial/Pause (4.28s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-927846 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-arm64 pause -p old-k8s-version-927846 --alsologtostderr -v=1: (1.101367493s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-927846 -n old-k8s-version-927846
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-927846 -n old-k8s-version-927846: exit status 2 (509.898442ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-927846 -n old-k8s-version-927846
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-927846 -n old-k8s-version-927846: exit status 2 (477.020596ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-927846 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-arm64 unpause -p old-k8s-version-927846 --alsologtostderr -v=1: (1.007365618s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-927846 -n old-k8s-version-927846
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-927846 -n old-k8s-version-927846
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (4.28s)

TestStartStop/group/embed-certs/serial/FirstStart (73.26s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-027503 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.32.2
E0317 14:09:58.218334 1120731 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/functional-027308/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-027503 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.32.2: (1m13.260120676s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (73.26s)

TestStartStop/group/embed-certs/serial/DeployApp (9.35s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-027503 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [b1dd8278-b3da-416b-960b-c468ae57539e] Pending
helpers_test.go:344: "busybox" [b1dd8278-b3da-416b-960b-c468ae57539e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [b1dd8278-b3da-416b-960b-c468ae57539e] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.003598562s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-027503 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.35s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.07s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-027503 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-027503 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.07s)

TestStartStop/group/embed-certs/serial/Stop (10.95s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-027503 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-027503 --alsologtostderr -v=3: (10.945650401s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (10.95s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.24s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-027503 -n embed-certs-027503
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-027503 -n embed-certs-027503: exit status 7 (94.081447ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-027503 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.24s)

TestStartStop/group/embed-certs/serial/SecondStart (274.03s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-027503 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.32.2
E0317 14:11:51.882473 1120731 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/old-k8s-version-927846/client.crt: no such file or directory" logger="UnhandledError"
E0317 14:11:51.888934 1120731 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/old-k8s-version-927846/client.crt: no such file or directory" logger="UnhandledError"
E0317 14:11:51.900340 1120731 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/old-k8s-version-927846/client.crt: no such file or directory" logger="UnhandledError"
E0317 14:11:51.921714 1120731 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/old-k8s-version-927846/client.crt: no such file or directory" logger="UnhandledError"
E0317 14:11:51.963700 1120731 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/old-k8s-version-927846/client.crt: no such file or directory" logger="UnhandledError"
E0317 14:11:52.045252 1120731 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/old-k8s-version-927846/client.crt: no such file or directory" logger="UnhandledError"
E0317 14:11:52.206813 1120731 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/old-k8s-version-927846/client.crt: no such file or directory" logger="UnhandledError"
E0317 14:11:52.528528 1120731 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/old-k8s-version-927846/client.crt: no such file or directory" logger="UnhandledError"
E0317 14:11:53.170670 1120731 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/old-k8s-version-927846/client.crt: no such file or directory" logger="UnhandledError"
E0317 14:11:54.452391 1120731 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/old-k8s-version-927846/client.crt: no such file or directory" logger="UnhandledError"
E0317 14:11:57.014009 1120731 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/old-k8s-version-927846/client.crt: no such file or directory" logger="UnhandledError"
E0317 14:12:02.135896 1120731 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/old-k8s-version-927846/client.crt: no such file or directory" logger="UnhandledError"
E0317 14:12:12.377219 1120731 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/old-k8s-version-927846/client.crt: no such file or directory" logger="UnhandledError"
E0317 14:12:32.858704 1120731 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/old-k8s-version-927846/client.crt: no such file or directory" logger="UnhandledError"
E0317 14:13:13.820989 1120731 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/old-k8s-version-927846/client.crt: no such file or directory" logger="UnhandledError"
E0317 14:13:31.801747 1120731 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/addons-464596/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-027503 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.32.2: (4m33.530400772s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-027503 -n embed-certs-027503
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (274.03s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-ws9tg" [4f40cd80-f6bf-4fde-9925-0b453f19b847] Running
E0317 14:13:42.902897 1120731 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/skaffold-463896/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.002794091s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-ws9tg" [4f40cd80-f6bf-4fde-9925-0b453f19b847] Running
E0317 14:13:48.732504 1120731 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/addons-464596/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003257158s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-610573 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.22s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-610573 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.22s)

TestStartStop/group/no-preload/serial/Pause (2.87s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-610573 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-610573 -n no-preload-610573
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-610573 -n no-preload-610573: exit status 2 (329.441944ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-610573 -n no-preload-610573
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-610573 -n no-preload-610573: exit status 2 (323.488673ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-610573 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-610573 -n no-preload-610573
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-610573 -n no-preload-610573
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.87s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (72.08s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-713589 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.32.2
E0317 14:14:35.743286 1120731 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/old-k8s-version-927846/client.crt: no such file or directory" logger="UnhandledError"
E0317 14:14:58.218059 1120731 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/functional-027308/client.crt: no such file or directory" logger="UnhandledError"
E0317 14:15:05.967840 1120731 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/skaffold-463896/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-713589 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.32.2: (1m12.07587233s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (72.08s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.39s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-713589 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [70f8e572-6fef-4576-9f3d-36ed9c365cf6] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [70f8e572-6fef-4576-9f3d-36ed9c365cf6] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.00281502s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-713589 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.39s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.15s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-713589 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-713589 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.047650739s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-713589 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.15s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (11.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-713589 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-713589 --alsologtostderr -v=3: (11.005854645s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (11.01s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-713589 -n default-k8s-diff-port-713589
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-713589 -n default-k8s-diff-port-713589: exit status 7 (92.651167ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-713589 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.22s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (268.27s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-713589 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.32.2
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-713589 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.32.2: (4m27.917921988s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-713589 -n default-k8s-diff-port-713589
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (268.27s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-rz8vx" [d36cf516-2aab-49a9-af7e-5f74af3fd80b] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005722611s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.12s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-rz8vx" [d36cf516-2aab-49a9-af7e-5f74af3fd80b] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003603362s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-027503 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.12s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.27s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-027503 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.27s)

TestStartStop/group/embed-certs/serial/Pause (3.2s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-027503 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-027503 -n embed-certs-027503
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-027503 -n embed-certs-027503: exit status 2 (332.878191ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-027503 -n embed-certs-027503
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-027503 -n embed-certs-027503: exit status 2 (349.603742ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-027503 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-027503 -n embed-certs-027503
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-027503 -n embed-certs-027503
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.20s)

TestStartStop/group/newest-cni/serial/FirstStart (39.62s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-229849 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.32.2
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-229849 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.32.2: (39.619658818s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (39.62s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.19s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-229849 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-229849 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.191126467s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.19s)

TestStartStop/group/newest-cni/serial/Stop (5.88s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-229849 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-229849 --alsologtostderr -v=3: (5.881831761s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (5.88s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-229849 -n newest-cni-229849
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-229849 -n newest-cni-229849: exit status 7 (72.930918ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-229849 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.22s)

TestStartStop/group/newest-cni/serial/SecondStart (18.37s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-229849 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.32.2
E0317 14:16:51.882733 1120731 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/old-k8s-version-927846/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-229849 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.32.2: (17.742980933s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-229849 -n newest-cni-229849
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (18.37s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.39s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-229849 image list --format=json
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.39s)

TestStartStop/group/newest-cni/serial/Pause (3.5s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-229849 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-229849 -n newest-cni-229849
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-229849 -n newest-cni-229849: exit status 2 (397.345919ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-229849 -n newest-cni-229849
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-229849 -n newest-cni-229849: exit status 2 (436.817319ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-229849 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-229849 -n newest-cni-229849
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-229849 -n newest-cni-229849
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.50s)

TestNetworkPlugins/group/auto/Start (46.56s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-996220 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker
E0317 14:17:19.585182 1120731 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/old-k8s-version-927846/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-996220 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker: (46.558098719s)
--- PASS: TestNetworkPlugins/group/auto/Start (46.56s)

TestNetworkPlugins/group/auto/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-996220 "pgrep -a kubelet"
I0317 14:18:00.735722 1120731 config.go:182] Loaded profile config "auto-996220": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.30s)

TestNetworkPlugins/group/auto/NetCatPod (11.27s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-996220 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-hps6c" [7de311f9-6991-4ada-9ca6-11de692af6ab] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-hps6c" [7de311f9-6991-4ada-9ca6-11de692af6ab] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.005152202s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.27s)

TestNetworkPlugins/group/auto/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-996220 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.18s)

TestNetworkPlugins/group/auto/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-996220 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.18s)

TestNetworkPlugins/group/auto/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-996220 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.17s)

TestNetworkPlugins/group/kindnet/Start (68.65s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-996220 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker
E0317 14:18:42.902412 1120731 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/skaffold-463896/client.crt: no such file or directory" logger="UnhandledError"
E0317 14:18:48.733198 1120731 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/addons-464596/client.crt: no such file or directory" logger="UnhandledError"
E0317 14:18:51.878907 1120731 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/no-preload-610573/client.crt: no such file or directory" logger="UnhandledError"
E0317 14:18:51.885298 1120731 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/no-preload-610573/client.crt: no such file or directory" logger="UnhandledError"
E0317 14:18:51.896629 1120731 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/no-preload-610573/client.crt: no such file or directory" logger="UnhandledError"
E0317 14:18:51.917884 1120731 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/no-preload-610573/client.crt: no such file or directory" logger="UnhandledError"
E0317 14:18:51.959284 1120731 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/no-preload-610573/client.crt: no such file or directory" logger="UnhandledError"
E0317 14:18:52.040598 1120731 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/no-preload-610573/client.crt: no such file or directory" logger="UnhandledError"
E0317 14:18:52.202029 1120731 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/no-preload-610573/client.crt: no such file or directory" logger="UnhandledError"
E0317 14:18:52.524192 1120731 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/no-preload-610573/client.crt: no such file or directory" logger="UnhandledError"
E0317 14:18:53.165873 1120731 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/no-preload-610573/client.crt: no such file or directory" logger="UnhandledError"
E0317 14:18:54.447152 1120731 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/no-preload-610573/client.crt: no such file or directory" logger="UnhandledError"
E0317 14:18:57.008991 1120731 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/no-preload-610573/client.crt: no such file or directory" logger="UnhandledError"
E0317 14:19:02.131632 1120731 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/no-preload-610573/client.crt: no such file or directory" logger="UnhandledError"
E0317 14:19:12.373353 1120731 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/no-preload-610573/client.crt: no such file or directory" logger="UnhandledError"
E0317 14:19:32.855486 1120731 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/no-preload-610573/client.crt: no such file or directory" logger="UnhandledError"
E0317 14:19:41.294427 1120731 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/functional-027308/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-996220 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker: (1m8.654151984s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (68.65s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-hrcbv" [9cb53059-6e0b-4da9-a84f-41d8260ed12c] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.007345726s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.43s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-996220 "pgrep -a kubelet"
I0317 14:19:48.038231 1120731 config.go:182] Loaded profile config "kindnet-996220": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.43s)

TestNetworkPlugins/group/kindnet/NetCatPod (11.37s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-996220 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-n7ls6" [c440af40-d059-48a8-9621-523b6487eeb8] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-n7ls6" [c440af40-d059-48a8-9621-523b6487eeb8] Running
E0317 14:19:58.218084 1120731 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/functional-027308/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.002669389s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.37s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-n7m4c" [c9e19264-6efc-4519-af05-2fb9aabfa6d7] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003201195s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)

TestNetworkPlugins/group/kindnet/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-996220 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.19s)

TestNetworkPlugins/group/kindnet/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-996220 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.18s)

TestNetworkPlugins/group/kindnet/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-996220 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.16s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.17s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-n7m4c" [c9e19264-6efc-4519-af05-2fb9aabfa6d7] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.011592259s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-713589 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.17s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.29s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-713589 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.29s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (3.97s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-713589 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-713589 -n default-k8s-diff-port-713589
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-713589 -n default-k8s-diff-port-713589: exit status 2 (442.400993ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-713589 -n default-k8s-diff-port-713589
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-713589 -n default-k8s-diff-port-713589: exit status 2 (397.714491ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-713589 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-713589 -n default-k8s-diff-port-713589
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-713589 -n default-k8s-diff-port-713589
E0317 14:20:13.817286 1120731 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/no-preload-610573/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.97s)
E0317 14:26:31.079402 1120731 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/default-k8s-diff-port-713589/client.crt: no such file or directory" logger="UnhandledError"
E0317 14:26:31.979276 1120731 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/custom-flannel-996220/client.crt: no such file or directory" logger="UnhandledError"
E0317 14:26:31.985719 1120731 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/custom-flannel-996220/client.crt: no such file or directory" logger="UnhandledError"
E0317 14:26:31.997286 1120731 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/custom-flannel-996220/client.crt: no such file or directory" logger="UnhandledError"
E0317 14:26:32.018806 1120731 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/custom-flannel-996220/client.crt: no such file or directory" logger="UnhandledError"
E0317 14:26:32.060357 1120731 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/custom-flannel-996220/client.crt: no such file or directory" logger="UnhandledError"
E0317 14:26:32.141809 1120731 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/custom-flannel-996220/client.crt: no such file or directory" logger="UnhandledError"
E0317 14:26:32.303305 1120731 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/custom-flannel-996220/client.crt: no such file or directory" logger="UnhandledError"
E0317 14:26:32.624982 1120731 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/custom-flannel-996220/client.crt: no such file or directory" logger="UnhandledError"
E0317 14:26:33.267124 1120731 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/custom-flannel-996220/client.crt: no such file or directory" logger="UnhandledError"
E0317 14:26:34.548404 1120731 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/custom-flannel-996220/client.crt: no such file or directory" logger="UnhandledError"
E0317 14:26:37.110634 1120731 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/custom-flannel-996220/client.crt: no such file or directory" logger="UnhandledError"
E0317 14:26:42.233002 1120731 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/custom-flannel-996220/client.crt: no such file or directory" logger="UnhandledError"
E0317 14:26:47.924312 1120731 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/calico-996220/client.crt: no such file or directory" logger="UnhandledError"
E0317 14:26:47.930725 1120731 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/calico-996220/client.crt: no such file or directory" logger="UnhandledError"
E0317 14:26:47.942214 1120731 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/calico-996220/client.crt: no such file or directory" logger="UnhandledError"
E0317 14:26:47.963595 1120731 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/calico-996220/client.crt: no such file or directory" logger="UnhandledError"
E0317 14:26:48.007906 1120731 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/calico-996220/client.crt: no such file or directory" logger="UnhandledError"
E0317 14:26:48.089347 1120731 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/calico-996220/client.crt: no such file or directory" logger="UnhandledError"
E0317 14:26:48.251042 1120731 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/calico-996220/client.crt: no such file or directory" logger="UnhandledError"
E0317 14:26:48.572678 1120731 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/calico-996220/client.crt: no such file or directory" logger="UnhandledError"
E0317 14:26:49.214107 1120731 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/calico-996220/client.crt: no such file or directory" logger="UnhandledError"
E0317 14:26:50.495443 1120731 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/calico-996220/client.crt: no such file or directory" logger="UnhandledError"
E0317 14:26:51.882278 1120731 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/old-k8s-version-927846/client.crt: no such file or directory" logger="UnhandledError"
E0317 14:26:52.474940 1120731 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/custom-flannel-996220/client.crt: no such file or directory" logger="UnhandledError"
E0317 14:26:53.056749 1120731 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/calico-996220/client.crt: no such file or directory" logger="UnhandledError"

TestNetworkPlugins/group/calico/Start (90.47s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-996220 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-996220 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker: (1m30.465294557s)
--- PASS: TestNetworkPlugins/group/calico/Start (90.47s)

TestNetworkPlugins/group/custom-flannel/Start (65.86s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-996220 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-996220 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker: (1m5.860328705s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (65.86s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.4s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-996220 "pgrep -a kubelet"
I0317 14:21:31.561006 1120731 config.go:182] Loaded profile config "custom-flannel-996220": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.40s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (12.45s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-996220 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-nxw5m" [a5ca1e61-ebc3-469e-932a-c1c66eff6780] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0317 14:21:35.740995 1120731 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/no-preload-610573/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-5d86dc444-nxw5m" [a5ca1e61-ebc3-469e-932a-c1c66eff6780] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 12.004338884s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (12.45s)

TestNetworkPlugins/group/custom-flannel/DNS (0.23s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-996220 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.23s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-996220 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.17s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.19s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-996220 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.19s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-xtbf7" [fe28f05c-55bd-4ff6-8188-13f92a4eae32] Running
E0317 14:21:51.882647 1120731 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/old-k8s-version-927846/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.008489924s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.37s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-996220 "pgrep -a kubelet"
I0317 14:21:54.304151 1120731 config.go:182] Loaded profile config "calico-996220": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.37s)

TestNetworkPlugins/group/calico/NetCatPod (11.43s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-996220 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-jcgld" [b76fb854-ae7d-464a-84c1-39ffa65e981a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-jcgld" [b76fb854-ae7d-464a-84c1-39ffa65e981a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.004478539s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.43s)

TestNetworkPlugins/group/calico/DNS (0.3s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-996220 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.30s)

TestNetworkPlugins/group/calico/Localhost (0.34s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-996220 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.34s)

TestNetworkPlugins/group/calico/HairPin (0.24s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-996220 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.24s)

TestNetworkPlugins/group/false/Start (76.88s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p false-996220 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p false-996220 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker: (1m16.881374374s)
--- PASS: TestNetworkPlugins/group/false/Start (76.88s)

TestNetworkPlugins/group/enable-default-cni/Start (75.79s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-996220 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker
E0317 14:23:00.987730 1120731 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/auto-996220/client.crt: no such file or directory" logger="UnhandledError"
E0317 14:23:00.994026 1120731 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/auto-996220/client.crt: no such file or directory" logger="UnhandledError"
E0317 14:23:01.005351 1120731 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/auto-996220/client.crt: no such file or directory" logger="UnhandledError"
E0317 14:23:01.026667 1120731 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/auto-996220/client.crt: no such file or directory" logger="UnhandledError"
E0317 14:23:01.068041 1120731 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/auto-996220/client.crt: no such file or directory" logger="UnhandledError"
E0317 14:23:01.149354 1120731 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/auto-996220/client.crt: no such file or directory" logger="UnhandledError"
E0317 14:23:01.310906 1120731 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/auto-996220/client.crt: no such file or directory" logger="UnhandledError"
E0317 14:23:01.632605 1120731 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/auto-996220/client.crt: no such file or directory" logger="UnhandledError"
E0317 14:23:02.274547 1120731 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/auto-996220/client.crt: no such file or directory" logger="UnhandledError"
E0317 14:23:03.556450 1120731 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/auto-996220/client.crt: no such file or directory" logger="UnhandledError"
E0317 14:23:06.117803 1120731 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/auto-996220/client.crt: no such file or directory" logger="UnhandledError"
E0317 14:23:11.239430 1120731 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/auto-996220/client.crt: no such file or directory" logger="UnhandledError"
E0317 14:23:21.481008 1120731 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/auto-996220/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-996220 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker: (1m15.788022338s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (75.79s)

TestNetworkPlugins/group/false/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p false-996220 "pgrep -a kubelet"
I0317 14:23:25.317677 1120731 config.go:182] Loaded profile config "false-996220": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.31s)

TestNetworkPlugins/group/false/NetCatPod (9.28s)

=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-996220 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-mdbth" [2ddb2ac8-3aef-43c2-90be-ae7dce8b30db] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-mdbth" [2ddb2ac8-3aef-43c2-90be-ae7dce8b30db] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 9.00394691s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (9.28s)

TestNetworkPlugins/group/false/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-996220 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.19s)

TestNetworkPlugins/group/false/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-996220 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.16s)

TestNetworkPlugins/group/false/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-996220 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.18s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.41s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-996220 "pgrep -a kubelet"
I0317 14:23:48.633672 1120731 config.go:182] Loaded profile config "enable-default-cni-996220": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.41s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.33s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-996220 replace --force -f testdata/netcat-deployment.yaml
E0317 14:23:48.732770 1120731 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/addons-464596/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-wf6th" [a59b4006-b0c5-4276-a197-2d20fcb67457] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0317 14:23:51.878861 1120731 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/no-preload-610573/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-5d86dc444-wf6th" [a59b4006-b0c5-4276-a197-2d20fcb67457] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 12.002743651s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.33s)

TestNetworkPlugins/group/flannel/Start (59.13s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-996220 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-996220 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker: (59.13100758s)
--- PASS: TestNetworkPlugins/group/flannel/Start (59.13s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-996220 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.19s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-996220 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-996220 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.18s)

TestNetworkPlugins/group/bridge/Start (79.08s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-996220 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker
E0317 14:24:41.605504 1120731 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/kindnet-996220/client.crt: no such file or directory" logger="UnhandledError"
E0317 14:24:41.611902 1120731 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/kindnet-996220/client.crt: no such file or directory" logger="UnhandledError"
E0317 14:24:41.623205 1120731 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/kindnet-996220/client.crt: no such file or directory" logger="UnhandledError"
E0317 14:24:41.644999 1120731 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/kindnet-996220/client.crt: no such file or directory" logger="UnhandledError"
E0317 14:24:41.686391 1120731 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/kindnet-996220/client.crt: no such file or directory" logger="UnhandledError"
E0317 14:24:41.768609 1120731 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/kindnet-996220/client.crt: no such file or directory" logger="UnhandledError"
E0317 14:24:41.930347 1120731 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/kindnet-996220/client.crt: no such file or directory" logger="UnhandledError"
E0317 14:24:42.251904 1120731 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/kindnet-996220/client.crt: no such file or directory" logger="UnhandledError"
E0317 14:24:42.893237 1120731 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/kindnet-996220/client.crt: no such file or directory" logger="UnhandledError"
E0317 14:24:44.174619 1120731 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/kindnet-996220/client.crt: no such file or directory" logger="UnhandledError"
E0317 14:24:46.736232 1120731 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/kindnet-996220/client.crt: no such file or directory" logger="UnhandledError"
E0317 14:24:51.858320 1120731 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/kindnet-996220/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-996220 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker: (1m19.075434049s)
--- PASS: TestNetworkPlugins/group/bridge/Start (79.08s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-ksz7h" [98a9759d-0070-4b5d-8eb6-1aefe7e56f92] Running
E0317 14:24:58.218148 1120731 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/functional-027308/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.007196108s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.38s)
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-996220 "pgrep -a kubelet"
I0317 14:25:02.048505 1120731 config.go:182] Loaded profile config "flannel-996220": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.38s)

TestNetworkPlugins/group/flannel/NetCatPod (11.36s)
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-996220 replace --force -f testdata/netcat-deployment.yaml
E0317 14:25:02.101049 1120731 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/kindnet-996220/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-74rd6" [fcf8d780-d11b-40b2-8ac0-eec9d2a22985] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-74rd6" [fcf8d780-d11b-40b2-8ac0-eec9d2a22985] Running
E0317 14:25:09.138732 1120731 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/default-k8s-diff-port-713589/client.crt: no such file or directory" logger="UnhandledError"
E0317 14:25:09.145063 1120731 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/default-k8s-diff-port-713589/client.crt: no such file or directory" logger="UnhandledError"
E0317 14:25:09.156410 1120731 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/default-k8s-diff-port-713589/client.crt: no such file or directory" logger="UnhandledError"
E0317 14:25:09.177728 1120731 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/default-k8s-diff-port-713589/client.crt: no such file or directory" logger="UnhandledError"
E0317 14:25:09.219085 1120731 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/default-k8s-diff-port-713589/client.crt: no such file or directory" logger="UnhandledError"
E0317 14:25:09.300405 1120731 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/default-k8s-diff-port-713589/client.crt: no such file or directory" logger="UnhandledError"
E0317 14:25:09.462435 1120731 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/default-k8s-diff-port-713589/client.crt: no such file or directory" logger="UnhandledError"
E0317 14:25:09.783783 1120731 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/default-k8s-diff-port-713589/client.crt: no such file or directory" logger="UnhandledError"
E0317 14:25:10.425526 1120731 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/default-k8s-diff-port-713589/client.crt: no such file or directory" logger="UnhandledError"
E0317 14:25:11.707380 1120731 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/default-k8s-diff-port-713589/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.003410475s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.36s)

TestNetworkPlugins/group/flannel/DNS (0.21s)
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-996220 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.21s)

TestNetworkPlugins/group/flannel/Localhost (0.16s)
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-996220 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.16s)

TestNetworkPlugins/group/flannel/HairPin (0.19s)
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-996220 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.19s)

TestNetworkPlugins/group/kubenet/Start (77.85s)
=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kubenet-996220 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker
E0317 14:25:44.852120 1120731 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/auto-996220/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kubenet-996220 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker: (1m17.850391316s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (77.85s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.42s)
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-996220 "pgrep -a kubelet"
I0317 14:25:45.275090 1120731 config.go:182] Loaded profile config "bridge-996220": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.42s)

TestNetworkPlugins/group/bridge/NetCatPod (12.32s)
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-996220 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-4r844" [295acf44-f857-4655-9832-3f5b290fd0f3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0317 14:25:50.117125 1120731 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/default-k8s-diff-port-713589/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-5d86dc444-4r844" [295acf44-f857-4655-9832-3f5b290fd0f3] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 12.003639957s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (12.32s)

TestNetworkPlugins/group/bridge/DNS (0.24s)
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-996220 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.24s)

TestNetworkPlugins/group/bridge/Localhost (0.19s)
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-996220 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.19s)

TestNetworkPlugins/group/bridge/HairPin (0.22s)
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-996220 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.22s)

TestNetworkPlugins/group/kubenet/KubeletFlags (0.26s)
=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kubenet-996220 "pgrep -a kubelet"
I0317 14:26:53.395986 1120731 config.go:182] Loaded profile config "kubenet-996220": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.26s)

TestNetworkPlugins/group/kubenet/NetCatPod (11.28s)
=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-996220 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-tk7fc" [8a0fc3b1-dd57-4538-b605-c634da1211ed] Pending
E0317 14:26:58.178925 1120731 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-1115410/.minikube/profiles/calico-996220/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-5d86dc444-tk7fc" [8a0fc3b1-dd57-4538-b605-c634da1211ed] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 11.004248974s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (11.28s)

TestNetworkPlugins/group/kubenet/DNS (0.17s)
=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-996220 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.17s)

TestNetworkPlugins/group/kubenet/Localhost (0.17s)
=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-996220 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.17s)

TestNetworkPlugins/group/kubenet/HairPin (0.16s)
=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-996220 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.16s)

Test skip (26/346)

TestDownloadOnly/v1.20.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.32.2/cached-images (0s)
=== RUN   TestDownloadOnly/v1.32.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.32.2/cached-images (0.00s)

TestDownloadOnly/v1.32.2/binaries (0s)
=== RUN   TestDownloadOnly/v1.32.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.32.2/binaries (0.00s)

TestDownloadOnly/v1.32.2/kubectl (0s)
=== RUN   TestDownloadOnly/v1.32.2/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.32.2/kubectl (0.00s)

TestDownloadOnlyKic (0.57s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-393601 --alsologtostderr --driver=docker  --container-runtime=docker
aaa_download_only_test.go:244: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-393601" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-393601
--- SKIP: TestDownloadOnlyKic (0.57s)

TestAddons/serial/GCPAuth/RealCredentials (0.01s)
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:698: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.01s)

TestAddons/parallel/Olm (0s)
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:422: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestAddons/parallel/AmdGpuDevicePlugin (0s)
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:972: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

TestDockerEnvContainerd (0s)
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1804: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:567: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestFunctionalNewestKubernetes (0s)
=== RUN   TestFunctionalNewestKubernetes
functional_test.go:84: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)
=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestChangeNoneUser (0s)
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestStartStop/group/disable-driver-mounts (0.15s)
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-859438" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-859438
--- SKIP: TestStartStop/group/disable-driver-mounts (0.15s)

TestNetworkPlugins/group/cilium (5.73s)
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:631: 
----------------------- debugLogs start: cilium-996220 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-996220

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-996220

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-996220

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-996220

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-996220

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-996220

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-996220

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-996220

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-996220

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-996220

>>> host: /etc/nsswitch.conf:
* Profile "cilium-996220" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-996220"

>>> host: /etc/hosts:
* Profile "cilium-996220" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-996220"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-996220" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-996220"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-996220

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-996220" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-996220"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-996220" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-996220"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-996220" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-996220" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-996220" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-996220" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-996220" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-996220" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-996220" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-996220" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-996220" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-996220"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-996220" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-996220"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-996220" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-996220"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-996220" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-996220"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-996220" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-996220"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-996220

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-996220

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-996220" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-996220" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-996220

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-996220

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-996220" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-996220" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-996220" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-996220" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-996220" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-996220" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-996220"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-996220" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-996220"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-996220" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-996220"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-996220" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-996220"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-996220" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-996220"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-996220

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-996220" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-996220"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-996220" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-996220"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-996220" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-996220"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-996220" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-996220"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-996220" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-996220"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-996220" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-996220"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-996220" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-996220"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-996220" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-996220"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-996220" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-996220"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-996220" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-996220"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-996220" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-996220"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-996220" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-996220"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-996220" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-996220"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-996220" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-996220"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-996220" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-996220"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-996220" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-996220"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-996220" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-996220"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-996220" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-996220"

                                                
                                                
----------------------- debugLogs end: cilium-996220 [took: 5.502162346s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-996220" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-996220
--- SKIP: TestNetworkPlugins/group/cilium (5.73s)