Test Report: Docker_Linux_containerd_arm64 21504

                    
3892f90e7d746f1b37c491f3707229f264f0f5da:2025-09-08:41335
Tests failed (1/332)

Order  Failed test            Duration
250    TestScheduledStopUnix  33.66s

TestScheduledStopUnix (33.66s)

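The decisive line in the log below is scheduled_stop_test.go:98: after the stop was rescheduled from 5m to 15s, the process spawned for the earlier schedule (PID 155217) was still alive. A minimal sketch of the liveness probe behind that assertion, as a hypothetical shell equivalent rather than minikube's actual test code; `kill -0` checks that a PID exists without delivering a signal:

	# Hedged sketch: does the stale scheduled-stop process still exist?
	PID=155217                     # PID from the failure message below
	if kill -0 "$PID" 2>/dev/null; then
	    echo "process $PID running but should have been killed on reschedule of stop"
	fi
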
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-160137 --memory=3072 --driver=docker  --container-runtime=containerd
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-160137 --memory=3072 --driver=docker  --container-runtime=containerd: (28.688364423s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-160137 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-160137 -n scheduled-stop-160137
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-160137 --schedule 15s
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:98: process 155217 running but should have been killed on reschedule of stop
panic.go:636: *** TestScheduledStopUnix FAILED at 2025-09-08 14:02:21.391045925 +0000 UTC m=+2110.549390162
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestScheduledStopUnix]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestScheduledStopUnix]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect scheduled-stop-160137
helpers_test.go:243: (dbg) docker inspect scheduled-stop-160137:

-- stdout --
	[
	    {
	        "Id": "fad8a2923aa0ef49116e4863d35177de96cf07ada764280017c35b4ee50735ed",
	        "Created": "2025-09-08T14:01:57.45059943Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 153205,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-08T14:01:57.515622336Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1a6e5b410fd9226cf2434621073598c7c01bccc994a53260ab0a0d834a0f1815",
	        "ResolvConfPath": "/var/lib/docker/containers/fad8a2923aa0ef49116e4863d35177de96cf07ada764280017c35b4ee50735ed/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/fad8a2923aa0ef49116e4863d35177de96cf07ada764280017c35b4ee50735ed/hostname",
	        "HostsPath": "/var/lib/docker/containers/fad8a2923aa0ef49116e4863d35177de96cf07ada764280017c35b4ee50735ed/hosts",
	        "LogPath": "/var/lib/docker/containers/fad8a2923aa0ef49116e4863d35177de96cf07ada764280017c35b4ee50735ed/fad8a2923aa0ef49116e4863d35177de96cf07ada764280017c35b4ee50735ed-json.log",
	        "Name": "/scheduled-stop-160137",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "scheduled-stop-160137:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "scheduled-stop-160137",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "fad8a2923aa0ef49116e4863d35177de96cf07ada764280017c35b4ee50735ed",
	                "LowerDir": "/var/lib/docker/overlay2/51b0c657769dfb7da9eaf7681fc76880e551f44769ec054772818620c83f843d-init/diff:/var/lib/docker/overlay2/81b144fe83a3a806b065a20c9a28409512052a83c9af991906fac9b66cb41fc1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/51b0c657769dfb7da9eaf7681fc76880e551f44769ec054772818620c83f843d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/51b0c657769dfb7da9eaf7681fc76880e551f44769ec054772818620c83f843d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/51b0c657769dfb7da9eaf7681fc76880e551f44769ec054772818620c83f843d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "scheduled-stop-160137",
	                "Source": "/var/lib/docker/volumes/scheduled-stop-160137/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "scheduled-stop-160137",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "scheduled-stop-160137",
	                "name.minikube.sigs.k8s.io": "scheduled-stop-160137",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "4aafbcbe00806389039c829e296092fef372b111d4f2a60400275c3581822a5b",
	            "SandboxKey": "/var/run/docker/netns/4aafbcbe0080",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32968"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32969"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32972"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32970"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32971"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "scheduled-stop-160137": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "3a:ac:3e:36:bd:5a",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "4ffe997009a8131c2e972d7bed8c324037b15ade543dadfcd261ae01c2938720",
	                    "EndpointID": "2514fe1f7662623df3ef72e79d5bdd86bb4adb9fadc9129a07d4f3560e408485",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "scheduled-stop-160137",
	                        "fad8a2923aa0"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
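Individual fields of the inspect output above can be read directly with a Go template, the same mechanism the harness uses elsewhere in this log (e.g. --format={{.State.Running}}); for example, to confirm the container PID and status captured above:

	docker inspect -f '{{.State.Pid}} {{.State.Status}}' scheduled-stop-160137
	# 153205 running
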
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-160137 -n scheduled-stop-160137
helpers_test.go:252: <<< TestScheduledStopUnix FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestScheduledStopUnix]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p scheduled-stop-160137 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p scheduled-stop-160137 logs -n 25: (1.259179582s)
helpers_test.go:260: TestScheduledStopUnix logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                              ARGS                                                                               │        PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ stop    │ -p multinode-789083                                                                                                                                             │ multinode-789083      │ jenkins │ v1.36.0 │ 08 Sep 25 13:56 UTC │ 08 Sep 25 13:56 UTC │
	│ start   │ -p multinode-789083 --wait=true -v=5 --alsologtostderr                                                                                                          │ multinode-789083      │ jenkins │ v1.36.0 │ 08 Sep 25 13:56 UTC │ 08 Sep 25 13:57 UTC │
	│ node    │ list -p multinode-789083                                                                                                                                        │ multinode-789083      │ jenkins │ v1.36.0 │ 08 Sep 25 13:57 UTC │                     │
	│ node    │ multinode-789083 node delete m03                                                                                                                                │ multinode-789083      │ jenkins │ v1.36.0 │ 08 Sep 25 13:57 UTC │ 08 Sep 25 13:57 UTC │
	│ stop    │ multinode-789083 stop                                                                                                                                           │ multinode-789083      │ jenkins │ v1.36.0 │ 08 Sep 25 13:57 UTC │ 08 Sep 25 13:57 UTC │
	│ start   │ -p multinode-789083 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd                                                          │ multinode-789083      │ jenkins │ v1.36.0 │ 08 Sep 25 13:57 UTC │ 08 Sep 25 13:58 UTC │
	│ node    │ list -p multinode-789083                                                                                                                                        │ multinode-789083      │ jenkins │ v1.36.0 │ 08 Sep 25 13:58 UTC │                     │
	│ start   │ -p multinode-789083-m02 --driver=docker  --container-runtime=containerd                                                                                         │ multinode-789083-m02  │ jenkins │ v1.36.0 │ 08 Sep 25 13:58 UTC │                     │
	│ start   │ -p multinode-789083-m03 --driver=docker  --container-runtime=containerd                                                                                         │ multinode-789083-m03  │ jenkins │ v1.36.0 │ 08 Sep 25 13:58 UTC │ 08 Sep 25 13:59 UTC │
	│ node    │ add -p multinode-789083                                                                                                                                         │ multinode-789083      │ jenkins │ v1.36.0 │ 08 Sep 25 13:59 UTC │                     │
	│ delete  │ -p multinode-789083-m03                                                                                                                                         │ multinode-789083-m03  │ jenkins │ v1.36.0 │ 08 Sep 25 13:59 UTC │ 08 Sep 25 13:59 UTC │
	│ delete  │ -p multinode-789083                                                                                                                                             │ multinode-789083      │ jenkins │ v1.36.0 │ 08 Sep 25 13:59 UTC │ 08 Sep 25 13:59 UTC │
	│ start   │ -p test-preload-053548 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.0 │ test-preload-053548   │ jenkins │ v1.36.0 │ 08 Sep 25 13:59 UTC │ 08 Sep 25 14:00 UTC │
	│ image   │ test-preload-053548 image pull gcr.io/k8s-minikube/busybox                                                                                                      │ test-preload-053548   │ jenkins │ v1.36.0 │ 08 Sep 25 14:00 UTC │ 08 Sep 25 14:00 UTC │
	│ stop    │ -p test-preload-053548                                                                                                                                          │ test-preload-053548   │ jenkins │ v1.36.0 │ 08 Sep 25 14:00 UTC │ 08 Sep 25 14:00 UTC │
	│ start   │ -p test-preload-053548 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd                                         │ test-preload-053548   │ jenkins │ v1.36.0 │ 08 Sep 25 14:00 UTC │ 08 Sep 25 14:01 UTC │
	│ image   │ test-preload-053548 image list                                                                                                                                  │ test-preload-053548   │ jenkins │ v1.36.0 │ 08 Sep 25 14:01 UTC │ 08 Sep 25 14:01 UTC │
	│ delete  │ -p test-preload-053548                                                                                                                                          │ test-preload-053548   │ jenkins │ v1.36.0 │ 08 Sep 25 14:01 UTC │ 08 Sep 25 14:01 UTC │
	│ start   │ -p scheduled-stop-160137 --memory=3072 --driver=docker  --container-runtime=containerd                                                                          │ scheduled-stop-160137 │ jenkins │ v1.36.0 │ 08 Sep 25 14:01 UTC │ 08 Sep 25 14:02 UTC │
	│ stop    │ -p scheduled-stop-160137 --schedule 5m                                                                                                                          │ scheduled-stop-160137 │ jenkins │ v1.36.0 │ 08 Sep 25 14:02 UTC │                     │
	│ stop    │ -p scheduled-stop-160137 --schedule 5m                                                                                                                          │ scheduled-stop-160137 │ jenkins │ v1.36.0 │ 08 Sep 25 14:02 UTC │                     │
	│ stop    │ -p scheduled-stop-160137 --schedule 5m                                                                                                                          │ scheduled-stop-160137 │ jenkins │ v1.36.0 │ 08 Sep 25 14:02 UTC │                     │
	│ stop    │ -p scheduled-stop-160137 --schedule 15s                                                                                                                         │ scheduled-stop-160137 │ jenkins │ v1.36.0 │ 08 Sep 25 14:02 UTC │                     │
	│ stop    │ -p scheduled-stop-160137 --schedule 15s                                                                                                                         │ scheduled-stop-160137 │ jenkins │ v1.36.0 │ 08 Sep 25 14:02 UTC │                     │
	│ stop    │ -p scheduled-stop-160137 --schedule 15s                                                                                                                         │ scheduled-stop-160137 │ jenkins │ v1.36.0 │ 08 Sep 25 14:02 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
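	The last six audit entries above show the trigger sequence: three stops scheduled at 5m followed by three rescheduled to 15s. Reproducing it by hand amounts to re-running the same commands from the table; each reschedule is expected to kill the process left by the previous one, which is the assertion that failed above:
	
	    out/minikube-linux-arm64 stop -p scheduled-stop-160137 --schedule 5m
	    out/minikube-linux-arm64 stop -p scheduled-stop-160137 --schedule 15s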
	
	
	==> Last Start <==
	Log file created at: 2025/09/08 14:01:52
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0908 14:01:52.231146  152809 out.go:360] Setting OutFile to fd 1 ...
	I0908 14:01:52.231250  152809 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 14:01:52.231254  152809 out.go:374] Setting ErrFile to fd 2...
	I0908 14:01:52.231257  152809 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 14:01:52.231509  152809 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21504-2314/.minikube/bin
	I0908 14:01:52.231895  152809 out.go:368] Setting JSON to false
	I0908 14:01:52.232712  152809 start.go:130] hostinfo: {"hostname":"ip-172-31-30-239","uptime":2663,"bootTime":1757337450,"procs":153,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0908 14:01:52.232786  152809 start.go:140] virtualization:  
	I0908 14:01:52.236576  152809 out.go:179] * [scheduled-stop-160137] minikube v1.36.0 on Ubuntu 20.04 (arm64)
	I0908 14:01:52.241036  152809 out.go:179]   - MINIKUBE_LOCATION=21504
	I0908 14:01:52.241121  152809 notify.go:220] Checking for updates...
	I0908 14:01:52.247632  152809 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0908 14:01:52.250841  152809 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21504-2314/kubeconfig
	I0908 14:01:52.254041  152809 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21504-2314/.minikube
	I0908 14:01:52.257294  152809 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0908 14:01:52.260470  152809 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0908 14:01:52.263622  152809 driver.go:421] Setting default libvirt URI to qemu:///system
	I0908 14:01:52.296748  152809 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0908 14:01:52.296857  152809 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0908 14:01:52.351365  152809 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:true NGoroutines:42 SystemTime:2025-09-08 14:01:52.341399291 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0908 14:01:52.351467  152809 docker.go:318] overlay module found
	I0908 14:01:52.354699  152809 out.go:179] * Using the docker driver based on user configuration
	I0908 14:01:52.357566  152809 start.go:304] selected driver: docker
	I0908 14:01:52.357582  152809 start.go:918] validating driver "docker" against <nil>
	I0908 14:01:52.357594  152809 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0908 14:01:52.358358  152809 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0908 14:01:52.413248  152809 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:true NGoroutines:42 SystemTime:2025-09-08 14:01:52.403566955 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0908 14:01:52.413395  152809 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0908 14:01:52.413606  152809 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I0908 14:01:52.416768  152809 out.go:179] * Using Docker driver with root privileges
	I0908 14:01:52.419819  152809 cni.go:84] Creating CNI manager for ""
	I0908 14:01:52.419889  152809 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0908 14:01:52.419896  152809 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I0908 14:01:52.419984  152809 start.go:348] cluster config:
	{Name:scheduled-stop-160137 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:scheduled-stop-160137 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 14:01:52.423180  152809 out.go:179] * Starting "scheduled-stop-160137" primary control-plane node in "scheduled-stop-160137" cluster
	I0908 14:01:52.426087  152809 cache.go:123] Beginning downloading kic base image for docker with containerd
	I0908 14:01:52.429026  152809 out.go:179] * Pulling base image v0.0.47-1756980985-21488 ...
	I0908 14:01:52.431960  152809 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0908 14:01:52.431997  152809 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 in local docker daemon
	I0908 14:01:52.432024  152809 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21504-2314/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-arm64.tar.lz4
	I0908 14:01:52.432032  152809 cache.go:58] Caching tarball of preloaded images
	I0908 14:01:52.432113  152809 preload.go:172] Found /home/jenkins/minikube-integration/21504-2314/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0908 14:01:52.432122  152809 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on containerd
	I0908 14:01:52.432516  152809 profile.go:143] Saving config to /home/jenkins/minikube-integration/21504-2314/.minikube/profiles/scheduled-stop-160137/config.json ...
	I0908 14:01:52.432539  152809 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21504-2314/.minikube/profiles/scheduled-stop-160137/config.json: {Name:mkf37b0caa8bf2a0d7488d00ebef3e592b60badd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 14:01:52.451840  152809 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 in local docker daemon, skipping pull
	I0908 14:01:52.451853  152809 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 exists in daemon, skipping load
	I0908 14:01:52.451865  152809 cache.go:232] Successfully downloaded all kic artifacts
	I0908 14:01:52.451895  152809 start.go:360] acquireMachinesLock for scheduled-stop-160137: {Name:mk2fd4b010769ec91004603a4e0ce667bfec7e77 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0908 14:01:52.452005  152809 start.go:364] duration metric: took 94.893µs to acquireMachinesLock for "scheduled-stop-160137"
	I0908 14:01:52.452029  152809 start.go:93] Provisioning new machine with config: &{Name:scheduled-stop-160137 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:scheduled-stop-160137 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0908 14:01:52.452091  152809 start.go:125] createHost starting for "" (driver="docker")
	I0908 14:01:52.455592  152809 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0908 14:01:52.455883  152809 start.go:159] libmachine.API.Create for "scheduled-stop-160137" (driver="docker")
	I0908 14:01:52.455923  152809 client.go:168] LocalClient.Create starting
	I0908 14:01:52.456002  152809 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21504-2314/.minikube/certs/ca.pem
	I0908 14:01:52.456040  152809 main.go:141] libmachine: Decoding PEM data...
	I0908 14:01:52.456052  152809 main.go:141] libmachine: Parsing certificate...
	I0908 14:01:52.456118  152809 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21504-2314/.minikube/certs/cert.pem
	I0908 14:01:52.456137  152809 main.go:141] libmachine: Decoding PEM data...
	I0908 14:01:52.456145  152809 main.go:141] libmachine: Parsing certificate...
	I0908 14:01:52.456592  152809 cli_runner.go:164] Run: docker network inspect scheduled-stop-160137 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0908 14:01:52.473090  152809 cli_runner.go:211] docker network inspect scheduled-stop-160137 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0908 14:01:52.473161  152809 network_create.go:284] running [docker network inspect scheduled-stop-160137] to gather additional debugging logs...
	I0908 14:01:52.473178  152809 cli_runner.go:164] Run: docker network inspect scheduled-stop-160137
	W0908 14:01:52.492160  152809 cli_runner.go:211] docker network inspect scheduled-stop-160137 returned with exit code 1
	I0908 14:01:52.492179  152809 network_create.go:287] error running [docker network inspect scheduled-stop-160137]: docker network inspect scheduled-stop-160137: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network scheduled-stop-160137 not found
	I0908 14:01:52.492191  152809 network_create.go:289] output of [docker network inspect scheduled-stop-160137]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network scheduled-stop-160137 not found
	
	** /stderr **
	I0908 14:01:52.492304  152809 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0908 14:01:52.509292  152809 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-431c1a61966e IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:46:58:d5:96:47:2e} reservation:<nil>}
	I0908 14:01:52.509536  152809 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-2cac6205be69 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:d6:0a:55:ef:54:5c} reservation:<nil>}
	I0908 14:01:52.509770  152809 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-e7f77c37dc8f IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:ea:d9:85:22:06:db} reservation:<nil>}
	I0908 14:01:52.510095  152809 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400193bcf0}
	I0908 14:01:52.510127  152809 network_create.go:124] attempt to create docker network scheduled-stop-160137 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0908 14:01:52.510185  152809 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=scheduled-stop-160137 scheduled-stop-160137
	I0908 14:01:52.568329  152809 network_create.go:108] docker network scheduled-stop-160137 192.168.76.0/24 created
	I0908 14:01:52.568528  152809 kic.go:121] calculated static IP "192.168.76.2" for the "scheduled-stop-160137" container
	I0908 14:01:52.568628  152809 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0908 14:01:52.583778  152809 cli_runner.go:164] Run: docker volume create scheduled-stop-160137 --label name.minikube.sigs.k8s.io=scheduled-stop-160137 --label created_by.minikube.sigs.k8s.io=true
	I0908 14:01:52.603002  152809 oci.go:103] Successfully created a docker volume scheduled-stop-160137
	I0908 14:01:52.603085  152809 cli_runner.go:164] Run: docker run --rm --name scheduled-stop-160137-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=scheduled-stop-160137 --entrypoint /usr/bin/test -v scheduled-stop-160137:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 -d /var/lib
	I0908 14:01:53.080698  152809 oci.go:107] Successfully prepared a docker volume scheduled-stop-160137
	I0908 14:01:53.080750  152809 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0908 14:01:53.080769  152809 kic.go:194] Starting extracting preloaded images to volume ...
	I0908 14:01:53.080844  152809 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21504-2314/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v scheduled-stop-160137:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 -I lz4 -xf /preloaded.tar -C /extractDir
	I0908 14:01:57.372340  152809 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21504-2314/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v scheduled-stop-160137:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 -I lz4 -xf /preloaded.tar -C /extractDir: (4.291455969s)
	I0908 14:01:57.372373  152809 kic.go:203] duration metric: took 4.291601029s to extract preloaded images to volume ...
	W0908 14:01:57.372509  152809 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0908 14:01:57.372607  152809 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0908 14:01:57.432944  152809 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname scheduled-stop-160137 --name scheduled-stop-160137 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=scheduled-stop-160137 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=scheduled-stop-160137 --network scheduled-stop-160137 --ip 192.168.76.2 --volume scheduled-stop-160137:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79
	I0908 14:01:57.763178  152809 cli_runner.go:164] Run: docker container inspect scheduled-stop-160137 --format={{.State.Running}}
	I0908 14:01:57.784330  152809 cli_runner.go:164] Run: docker container inspect scheduled-stop-160137 --format={{.State.Status}}
	I0908 14:01:57.814412  152809 cli_runner.go:164] Run: docker exec scheduled-stop-160137 stat /var/lib/dpkg/alternatives/iptables
	I0908 14:01:57.871946  152809 oci.go:144] the created container "scheduled-stop-160137" has a running status.
	I0908 14:01:57.871977  152809 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21504-2314/.minikube/machines/scheduled-stop-160137/id_rsa...
	I0908 14:01:58.211954  152809 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21504-2314/.minikube/machines/scheduled-stop-160137/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0908 14:01:58.238111  152809 cli_runner.go:164] Run: docker container inspect scheduled-stop-160137 --format={{.State.Status}}
	I0908 14:01:58.265116  152809 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0908 14:01:58.265127  152809 kic_runner.go:114] Args: [docker exec --privileged scheduled-stop-160137 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0908 14:01:58.325851  152809 cli_runner.go:164] Run: docker container inspect scheduled-stop-160137 --format={{.State.Status}}
	I0908 14:01:58.353799  152809 machine.go:93] provisionDockerMachine start ...
	I0908 14:01:58.353881  152809 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-160137
	I0908 14:01:58.384629  152809 main.go:141] libmachine: Using SSH client type: native
	I0908 14:01:58.384963  152809 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef840] 0x3f2000 <nil>  [] 0s} 127.0.0.1 32968 <nil> <nil>}
	I0908 14:01:58.384971  152809 main.go:141] libmachine: About to run SSH command:
	hostname
	I0908 14:01:58.562625  152809 main.go:141] libmachine: SSH cmd err, output: <nil>: scheduled-stop-160137
	
	I0908 14:01:58.562643  152809 ubuntu.go:182] provisioning hostname "scheduled-stop-160137"
	I0908 14:01:58.562722  152809 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-160137
	I0908 14:01:58.593437  152809 main.go:141] libmachine: Using SSH client type: native
	I0908 14:01:58.593803  152809 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef840] 0x3f2000 <nil>  [] 0s} 127.0.0.1 32968 <nil> <nil>}
	I0908 14:01:58.593814  152809 main.go:141] libmachine: About to run SSH command:
	sudo hostname scheduled-stop-160137 && echo "scheduled-stop-160137" | sudo tee /etc/hostname
	I0908 14:01:58.769812  152809 main.go:141] libmachine: SSH cmd err, output: <nil>: scheduled-stop-160137
	
	I0908 14:01:58.769892  152809 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-160137
	I0908 14:01:58.792207  152809 main.go:141] libmachine: Using SSH client type: native
	I0908 14:01:58.792537  152809 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef840] 0x3f2000 <nil>  [] 0s} 127.0.0.1 32968 <nil> <nil>}
	I0908 14:01:58.792553  152809 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sscheduled-stop-160137' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 scheduled-stop-160137/g' /etc/hosts;
				else 
					echo '127.0.1.1 scheduled-stop-160137' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0908 14:01:58.924508  152809 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0908 14:01:58.924522  152809 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21504-2314/.minikube CaCertPath:/home/jenkins/minikube-integration/21504-2314/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21504-2314/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21504-2314/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21504-2314/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21504-2314/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21504-2314/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21504-2314/.minikube}
	I0908 14:01:58.924551  152809 ubuntu.go:190] setting up certificates
	I0908 14:01:58.924561  152809 provision.go:84] configureAuth start
	I0908 14:01:58.924630  152809 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" scheduled-stop-160137
	I0908 14:01:58.941779  152809 provision.go:143] copyHostCerts
	I0908 14:01:58.941828  152809 exec_runner.go:144] found /home/jenkins/minikube-integration/21504-2314/.minikube/ca.pem, removing ...
	I0908 14:01:58.941836  152809 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21504-2314/.minikube/ca.pem
	I0908 14:01:58.941893  152809 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21504-2314/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21504-2314/.minikube/ca.pem (1078 bytes)
	I0908 14:01:58.941968  152809 exec_runner.go:144] found /home/jenkins/minikube-integration/21504-2314/.minikube/cert.pem, removing ...
	I0908 14:01:58.941972  152809 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21504-2314/.minikube/cert.pem
	I0908 14:01:58.941996  152809 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21504-2314/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21504-2314/.minikube/cert.pem (1123 bytes)
	I0908 14:01:58.942045  152809 exec_runner.go:144] found /home/jenkins/minikube-integration/21504-2314/.minikube/key.pem, removing ...
	I0908 14:01:58.942048  152809 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21504-2314/.minikube/key.pem
	I0908 14:01:58.942072  152809 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21504-2314/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21504-2314/.minikube/key.pem (1679 bytes)
	I0908 14:01:58.942115  152809 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21504-2314/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21504-2314/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21504-2314/.minikube/certs/ca-key.pem org=jenkins.scheduled-stop-160137 san=[127.0.0.1 192.168.76.2 localhost minikube scheduled-stop-160137]
	I0908 14:01:59.274235  152809 provision.go:177] copyRemoteCerts
	I0908 14:01:59.274295  152809 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0908 14:01:59.274331  152809 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-160137
	I0908 14:01:59.293937  152809 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32968 SSHKeyPath:/home/jenkins/minikube-integration/21504-2314/.minikube/machines/scheduled-stop-160137/id_rsa Username:docker}
	I0908 14:01:59.385570  152809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21504-2314/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0908 14:01:59.410850  152809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21504-2314/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0908 14:01:59.435721  152809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21504-2314/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0908 14:01:59.461829  152809 provision.go:87] duration metric: took 537.245887ms to configureAuth
	I0908 14:01:59.461858  152809 ubuntu.go:206] setting minikube options for container-runtime
	I0908 14:01:59.462040  152809 config.go:182] Loaded profile config "scheduled-stop-160137": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0908 14:01:59.462046  152809 machine.go:96] duration metric: took 1.108237196s to provisionDockerMachine
	I0908 14:01:59.462052  152809 client.go:171] duration metric: took 7.006123748s to LocalClient.Create
	I0908 14:01:59.462082  152809 start.go:167] duration metric: took 7.006192836s to libmachine.API.Create "scheduled-stop-160137"
	I0908 14:01:59.462089  152809 start.go:293] postStartSetup for "scheduled-stop-160137" (driver="docker")
	I0908 14:01:59.462097  152809 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0908 14:01:59.462149  152809 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0908 14:01:59.462185  152809 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-160137
	I0908 14:01:59.479418  152809 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32968 SSHKeyPath:/home/jenkins/minikube-integration/21504-2314/.minikube/machines/scheduled-stop-160137/id_rsa Username:docker}
	I0908 14:01:59.569848  152809 ssh_runner.go:195] Run: cat /etc/os-release
	I0908 14:01:59.573039  152809 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0908 14:01:59.573062  152809 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0908 14:01:59.573071  152809 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0908 14:01:59.573077  152809 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0908 14:01:59.573090  152809 filesync.go:126] Scanning /home/jenkins/minikube-integration/21504-2314/.minikube/addons for local assets ...
	I0908 14:01:59.573159  152809 filesync.go:126] Scanning /home/jenkins/minikube-integration/21504-2314/.minikube/files for local assets ...
	I0908 14:01:59.573238  152809 filesync.go:149] local asset: /home/jenkins/minikube-integration/21504-2314/.minikube/files/etc/ssl/certs/41182.pem -> 41182.pem in /etc/ssl/certs
	I0908 14:01:59.573338  152809 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0908 14:01:59.581926  152809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21504-2314/.minikube/files/etc/ssl/certs/41182.pem --> /etc/ssl/certs/41182.pem (1708 bytes)
	I0908 14:01:59.606671  152809 start.go:296] duration metric: took 144.567521ms for postStartSetup
	I0908 14:01:59.607071  152809 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" scheduled-stop-160137
	I0908 14:01:59.624820  152809 profile.go:143] Saving config to /home/jenkins/minikube-integration/21504-2314/.minikube/profiles/scheduled-stop-160137/config.json ...
	I0908 14:01:59.625113  152809 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0908 14:01:59.625154  152809 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-160137
	I0908 14:01:59.642235  152809 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32968 SSHKeyPath:/home/jenkins/minikube-integration/21504-2314/.minikube/machines/scheduled-stop-160137/id_rsa Username:docker}
	I0908 14:01:59.729137  152809 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0908 14:01:59.733593  152809 start.go:128] duration metric: took 7.28148785s to createHost
	I0908 14:01:59.733607  152809 start.go:83] releasing machines lock for "scheduled-stop-160137", held for 7.281595289s
	I0908 14:01:59.733675  152809 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" scheduled-stop-160137
	I0908 14:01:59.754817  152809 ssh_runner.go:195] Run: cat /version.json
	I0908 14:01:59.754860  152809 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-160137
	I0908 14:01:59.755104  152809 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0908 14:01:59.755161  152809 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-160137
	I0908 14:01:59.774691  152809 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32968 SSHKeyPath:/home/jenkins/minikube-integration/21504-2314/.minikube/machines/scheduled-stop-160137/id_rsa Username:docker}
	I0908 14:01:59.776090  152809 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32968 SSHKeyPath:/home/jenkins/minikube-integration/21504-2314/.minikube/machines/scheduled-stop-160137/id_rsa Username:docker}
	I0908 14:01:59.991598  152809 ssh_runner.go:195] Run: systemctl --version
	I0908 14:01:59.995610  152809 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0908 14:01:59.999686  152809 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0908 14:02:00.088865  152809 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0908 14:02:00.088946  152809 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0908 14:02:00.177861  152809 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
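Editor's note: the two `find` commands above first patch any loopback CNI config in place (adding a "name" field and pinning cniVersion to 1.0.0), then rename competing bridge/podman configs so only minikube's chosen CNI stays active. A sketch of what the second command amounted to on this host (the two file names are taken from the log line above):

    # Disable competing CNI configs by renaming them, as the find/-exec mv above did
    sudo mv /etc/cni/net.d/87-podman-bridge.conflist /etc/cni/net.d/87-podman-bridge.conflist.mk_disabled
    sudo mv /etc/cni/net.d/100-crio-bridge.conf /etc/cni/net.d/100-crio-bridge.conf.mk_disabled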
	I0908 14:02:00.177876  152809 start.go:495] detecting cgroup driver to use...
	I0908 14:02:00.177914  152809 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0908 14:02:00.177978  152809 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0908 14:02:00.207455  152809 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0908 14:02:00.228552  152809 docker.go:218] disabling cri-docker service (if available) ...
	I0908 14:02:00.228627  152809 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0908 14:02:00.248270  152809 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0908 14:02:00.275857  152809 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0908 14:02:00.392804  152809 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0908 14:02:00.491175  152809 docker.go:234] disabling docker service ...
	I0908 14:02:00.491237  152809 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0908 14:02:00.515385  152809 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0908 14:02:00.529411  152809 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0908 14:02:00.618870  152809 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0908 14:02:00.709460  152809 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0908 14:02:00.721147  152809 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0908 14:02:00.738459  152809 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0908 14:02:00.748818  152809 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0908 14:02:00.759282  152809 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0908 14:02:00.759344  152809 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0908 14:02:00.770032  152809 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0908 14:02:00.780126  152809 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0908 14:02:00.790252  152809 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0908 14:02:00.800223  152809 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0908 14:02:00.809501  152809 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0908 14:02:00.820116  152809 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0908 14:02:00.830761  152809 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
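Editor's note: the printf/tee at 14:02:00.721 writes a one-line /etc/crictl.yaml (which is why the later `crictl version` and `crictl images` calls need no flags), and the run of `sed` edits from 14:02:00.738 through 14:02:00.830 rewrites /etc/containerd/config.toml in place: pinning the pause image, forcing the runc v2 runtime, selecting the cgroupfs driver, and enabling unprivileged ports. An illustrative sketch of the two files those commands leave behind (unrelated keys omitted):

    # /etc/crictl.yaml, as written by the tee above
    runtime-endpoint: unix:///run/containerd/containerd.sock

    # /etc/containerd/config.toml after the sed edits (illustrative excerpt)
    [plugins."io.containerd.grpc.v1.cri"]
      enable_unprivileged_ports = true
      restrict_oom_score_adj = false
      sandbox_image = "registry.k8s.io/pause:3.10.1"
      [plugins."io.containerd.grpc.v1.cri".cni]
        conf_dir = "/etc/cni/net.d"
      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
        SystemdCgroup = false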
	I0908 14:02:00.841005  152809 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0908 14:02:00.850071  152809 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0908 14:02:00.859141  152809 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 14:02:00.939630  152809 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0908 14:02:01.077077  152809 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0908 14:02:01.077142  152809 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0908 14:02:01.081876  152809 start.go:563] Will wait 60s for crictl version
	I0908 14:02:01.081936  152809 ssh_runner.go:195] Run: which crictl
	I0908 14:02:01.085540  152809 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0908 14:02:01.126975  152809 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.27
	RuntimeApiVersion:  v1
	I0908 14:02:01.127035  152809 ssh_runner.go:195] Run: containerd --version
	I0908 14:02:01.150026  152809 ssh_runner.go:195] Run: containerd --version
	I0908 14:02:01.178469  152809 out.go:179] * Preparing Kubernetes v1.34.0 on containerd 1.7.27 ...
	I0908 14:02:01.181728  152809 cli_runner.go:164] Run: docker network inspect scheduled-stop-160137 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0908 14:02:01.199632  152809 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0908 14:02:01.203491  152809 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
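Editor's note: the bash one-liner above is minikube's idempotent /etc/hosts update: strip any stale host.minikube.internal entry, append the fresh record, and `cp` the temp file over /etc/hosts (cp rather than mv, most likely because Docker bind-mounts /etc/hosts into the container, so the file must be overwritten in place). The same pattern spelled out, with the record value taken from this run:

    # Idempotent hosts-record injection, as in the log line above
    RECORD=$'192.168.76.1\thost.minikube.internal'
    { grep -v $'\thost.minikube.internal$' /etc/hosts; echo "$RECORD"; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts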
	I0908 14:02:01.215401  152809 kubeadm.go:875] updating cluster {Name:scheduled-stop-160137 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:scheduled-stop-160137 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0908 14:02:01.215506  152809 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0908 14:02:01.215569  152809 ssh_runner.go:195] Run: sudo crictl images --output json
	I0908 14:02:01.252572  152809 containerd.go:627] all images are preloaded for containerd runtime.
	I0908 14:02:01.252585  152809 containerd.go:534] Images already preloaded, skipping extraction
	I0908 14:02:01.252646  152809 ssh_runner.go:195] Run: sudo crictl images --output json
	I0908 14:02:01.293629  152809 containerd.go:627] all images are preloaded for containerd runtime.
	I0908 14:02:01.293643  152809 cache_images.go:85] Images are preloaded, skipping loading
	I0908 14:02:01.293649  152809 kubeadm.go:926] updating node { 192.168.76.2 8443 v1.34.0 containerd true true} ...
	I0908 14:02:01.293756  152809 kubeadm.go:938] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=scheduled-stop-160137 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:scheduled-stop-160137 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0908 14:02:01.293823  152809 ssh_runner.go:195] Run: sudo crictl info
	I0908 14:02:01.334039  152809 cni.go:84] Creating CNI manager for ""
	I0908 14:02:01.334050  152809 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0908 14:02:01.334059  152809 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0908 14:02:01.334079  152809 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:scheduled-stop-160137 NodeName:scheduled-stop-160137 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0908 14:02:01.334190  152809 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "scheduled-stop-160137"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
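Editor's note: the multi-document config above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) is what gets scp'd to /var/tmp/minikube/kubeadm.yaml.new a few lines below and later copied to kubeadm.yaml. As a hedged aside, recent kubeadm releases can sanity-check such a file before an init; this is not part of minikube's flow, only a debugging option:

    # Hedged: `kubeadm config validate` is available in recent kubeadm releases (>= v1.26)
    sudo /var/lib/minikube/binaries/v1.34.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml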
	I0908 14:02:01.334259  152809 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0908 14:02:01.343572  152809 binaries.go:44] Found k8s binaries, skipping transfer
	I0908 14:02:01.343636  152809 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0908 14:02:01.352634  152809 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (325 bytes)
	I0908 14:02:01.372239  152809 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0908 14:02:01.392588  152809 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2234 bytes)
	I0908 14:02:01.411623  152809 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0908 14:02:01.415282  152809 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0908 14:02:01.426242  152809 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 14:02:01.511401  152809 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0908 14:02:01.528198  152809 certs.go:68] Setting up /home/jenkins/minikube-integration/21504-2314/.minikube/profiles/scheduled-stop-160137 for IP: 192.168.76.2
	I0908 14:02:01.528210  152809 certs.go:194] generating shared ca certs ...
	I0908 14:02:01.528227  152809 certs.go:226] acquiring lock for ca certs: {Name:mke132b78a39150f004355d03d18e99cfccd0efd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 14:02:01.528441  152809 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21504-2314/.minikube/ca.key
	I0908 14:02:01.528493  152809 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21504-2314/.minikube/proxy-client-ca.key
	I0908 14:02:01.528499  152809 certs.go:256] generating profile certs ...
	I0908 14:02:01.528557  152809 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21504-2314/.minikube/profiles/scheduled-stop-160137/client.key
	I0908 14:02:01.528575  152809 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21504-2314/.minikube/profiles/scheduled-stop-160137/client.crt with IP's: []
	I0908 14:02:02.235155  152809 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21504-2314/.minikube/profiles/scheduled-stop-160137/client.crt ...
	I0908 14:02:02.235173  152809 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21504-2314/.minikube/profiles/scheduled-stop-160137/client.crt: {Name:mk0a51fbdf29af1b9109cef68eb177e7cf646d1a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 14:02:02.235376  152809 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21504-2314/.minikube/profiles/scheduled-stop-160137/client.key ...
	I0908 14:02:02.235387  152809 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21504-2314/.minikube/profiles/scheduled-stop-160137/client.key: {Name:mk6deb45f52ffce9f428d050b8baf515a2c1b72d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 14:02:02.235512  152809 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21504-2314/.minikube/profiles/scheduled-stop-160137/apiserver.key.4bce23d1
	I0908 14:02:02.235532  152809 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21504-2314/.minikube/profiles/scheduled-stop-160137/apiserver.crt.4bce23d1 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I0908 14:02:02.497889  152809 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21504-2314/.minikube/profiles/scheduled-stop-160137/apiserver.crt.4bce23d1 ...
	I0908 14:02:02.497904  152809 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21504-2314/.minikube/profiles/scheduled-stop-160137/apiserver.crt.4bce23d1: {Name:mkcc534aac4e22dbb4ec58dd3f28fee22ff23bba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 14:02:02.498098  152809 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21504-2314/.minikube/profiles/scheduled-stop-160137/apiserver.key.4bce23d1 ...
	I0908 14:02:02.498106  152809 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21504-2314/.minikube/profiles/scheduled-stop-160137/apiserver.key.4bce23d1: {Name:mk21f1f11ad586ebd907ae825687e3a88073eea7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 14:02:02.498192  152809 certs.go:381] copying /home/jenkins/minikube-integration/21504-2314/.minikube/profiles/scheduled-stop-160137/apiserver.crt.4bce23d1 -> /home/jenkins/minikube-integration/21504-2314/.minikube/profiles/scheduled-stop-160137/apiserver.crt
	I0908 14:02:02.498267  152809 certs.go:385] copying /home/jenkins/minikube-integration/21504-2314/.minikube/profiles/scheduled-stop-160137/apiserver.key.4bce23d1 -> /home/jenkins/minikube-integration/21504-2314/.minikube/profiles/scheduled-stop-160137/apiserver.key
	I0908 14:02:02.498317  152809 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21504-2314/.minikube/profiles/scheduled-stop-160137/proxy-client.key
	I0908 14:02:02.498330  152809 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21504-2314/.minikube/profiles/scheduled-stop-160137/proxy-client.crt with IP's: []
	I0908 14:02:03.184389  152809 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21504-2314/.minikube/profiles/scheduled-stop-160137/proxy-client.crt ...
	I0908 14:02:03.184406  152809 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21504-2314/.minikube/profiles/scheduled-stop-160137/proxy-client.crt: {Name:mk46af2331ff689498db959d70bb92c81e3d3490 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 14:02:03.184602  152809 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21504-2314/.minikube/profiles/scheduled-stop-160137/proxy-client.key ...
	I0908 14:02:03.184610  152809 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21504-2314/.minikube/profiles/scheduled-stop-160137/proxy-client.key: {Name:mk6481bd02fd2e2c2b2a8bde8662d2fd4a1a4d68 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
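Editor's note: the client, apiserver, and aggregator profile certs above are generated in-process by minikube's crypto helpers and signed by the cached minikubeCA / proxyClientCA keys. A rough openssl equivalent of the client-cert step, for readers reproducing it by hand (subject and validity are illustrative assumptions, not read from the log):

    # Hedged openssl sketch of a CA-signed client cert like client.crt/client.key above
    openssl genrsa -out client.key 2048
    openssl req -new -key client.key -subj "/O=system:masters/CN=minikube-user" -out client.csr
    openssl x509 -req -in client.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out client.crt -days 365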
	I0908 14:02:03.184796  152809 certs.go:484] found cert: /home/jenkins/minikube-integration/21504-2314/.minikube/certs/4118.pem (1338 bytes)
	W0908 14:02:03.184834  152809 certs.go:480] ignoring /home/jenkins/minikube-integration/21504-2314/.minikube/certs/4118_empty.pem, impossibly tiny 0 bytes
	I0908 14:02:03.184849  152809 certs.go:484] found cert: /home/jenkins/minikube-integration/21504-2314/.minikube/certs/ca-key.pem (1675 bytes)
	I0908 14:02:03.184876  152809 certs.go:484] found cert: /home/jenkins/minikube-integration/21504-2314/.minikube/certs/ca.pem (1078 bytes)
	I0908 14:02:03.184898  152809 certs.go:484] found cert: /home/jenkins/minikube-integration/21504-2314/.minikube/certs/cert.pem (1123 bytes)
	I0908 14:02:03.184918  152809 certs.go:484] found cert: /home/jenkins/minikube-integration/21504-2314/.minikube/certs/key.pem (1679 bytes)
	I0908 14:02:03.184962  152809 certs.go:484] found cert: /home/jenkins/minikube-integration/21504-2314/.minikube/files/etc/ssl/certs/41182.pem (1708 bytes)
	I0908 14:02:03.185655  152809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21504-2314/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0908 14:02:03.210681  152809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21504-2314/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0908 14:02:03.235652  152809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21504-2314/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0908 14:02:03.259667  152809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21504-2314/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0908 14:02:03.284544  152809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21504-2314/.minikube/profiles/scheduled-stop-160137/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0908 14:02:03.310655  152809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21504-2314/.minikube/profiles/scheduled-stop-160137/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0908 14:02:03.335879  152809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21504-2314/.minikube/profiles/scheduled-stop-160137/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0908 14:02:03.360560  152809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21504-2314/.minikube/profiles/scheduled-stop-160137/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0908 14:02:03.384823  152809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21504-2314/.minikube/files/etc/ssl/certs/41182.pem --> /usr/share/ca-certificates/41182.pem (1708 bytes)
	I0908 14:02:03.409523  152809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21504-2314/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0908 14:02:03.434381  152809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21504-2314/.minikube/certs/4118.pem --> /usr/share/ca-certificates/4118.pem (1338 bytes)
	I0908 14:02:03.459798  152809 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0908 14:02:03.479117  152809 ssh_runner.go:195] Run: openssl version
	I0908 14:02:03.484797  152809 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/41182.pem && ln -fs /usr/share/ca-certificates/41182.pem /etc/ssl/certs/41182.pem"
	I0908 14:02:03.494906  152809 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41182.pem
	I0908 14:02:03.498626  152809 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep  8 13:35 /usr/share/ca-certificates/41182.pem
	I0908 14:02:03.498693  152809 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41182.pem
	I0908 14:02:03.505782  152809 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/41182.pem /etc/ssl/certs/3ec20f2e.0"
	I0908 14:02:03.515615  152809 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0908 14:02:03.525712  152809 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0908 14:02:03.529412  152809 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  8 13:27 /usr/share/ca-certificates/minikubeCA.pem
	I0908 14:02:03.529474  152809 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0908 14:02:03.536718  152809 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0908 14:02:03.546369  152809 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4118.pem && ln -fs /usr/share/ca-certificates/4118.pem /etc/ssl/certs/4118.pem"
	I0908 14:02:03.555983  152809 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4118.pem
	I0908 14:02:03.559540  152809 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep  8 13:35 /usr/share/ca-certificates/4118.pem
	I0908 14:02:03.559613  152809 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4118.pem
	I0908 14:02:03.566758  152809 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4118.pem /etc/ssl/certs/51391683.0"
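Editor's note: the `openssl x509 -hash` / `ln -fs` pairs above implement OpenSSL's hashed-directory lookup convention: a CA in /etc/ssl/certs is found through a symlink named <subject-hash>.0, which is why minikubeCA.pem ends up linked as b5213941.0. Reproducing one link by hand:

    # Recreate the <subject-hash>.0 symlink for the minikube CA (illustrative)
    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"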
	I0908 14:02:03.577904  152809 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0908 14:02:03.583582  152809 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0908 14:02:03.583633  152809 kubeadm.go:392] StartCluster: {Name:scheduled-stop-160137 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:scheduled-stop-160137 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 14:02:03.583701  152809 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0908 14:02:03.583764  152809 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0908 14:02:03.622126  152809 cri.go:89] found id: ""
	I0908 14:02:03.622208  152809 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0908 14:02:03.631707  152809 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0908 14:02:03.641112  152809 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0908 14:02:03.641167  152809 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0908 14:02:03.650640  152809 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0908 14:02:03.650655  152809 kubeadm.go:157] found existing configuration files:
	
	I0908 14:02:03.650706  152809 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0908 14:02:03.659535  152809 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0908 14:02:03.659591  152809 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0908 14:02:03.668214  152809 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0908 14:02:03.676886  152809 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0908 14:02:03.676954  152809 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0908 14:02:03.685758  152809 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0908 14:02:03.694390  152809 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0908 14:02:03.694493  152809 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0908 14:02:03.703179  152809 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0908 14:02:03.712270  152809 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0908 14:02:03.712326  152809 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0908 14:02:03.721503  152809 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0908 14:02:03.765201  152809 kubeadm.go:310] [init] Using Kubernetes version: v1.34.0
	I0908 14:02:03.765273  152809 kubeadm.go:310] [preflight] Running pre-flight checks
	I0908 14:02:03.782399  152809 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0908 14:02:03.782462  152809 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1084-aws
	I0908 14:02:03.782497  152809 kubeadm.go:310] OS: Linux
	I0908 14:02:03.782549  152809 kubeadm.go:310] CGROUPS_CPU: enabled
	I0908 14:02:03.782597  152809 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0908 14:02:03.782644  152809 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0908 14:02:03.782699  152809 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0908 14:02:03.782750  152809 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0908 14:02:03.782797  152809 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0908 14:02:03.782841  152809 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0908 14:02:03.782889  152809 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0908 14:02:03.782935  152809 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0908 14:02:03.850445  152809 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0908 14:02:03.850550  152809 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0908 14:02:03.850642  152809 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0908 14:02:03.860818  152809 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0908 14:02:03.867275  152809 out.go:252]   - Generating certificates and keys ...
	I0908 14:02:03.867359  152809 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0908 14:02:03.867426  152809 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0908 14:02:04.709086  152809 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0908 14:02:05.361276  152809 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0908 14:02:06.248429  152809 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0908 14:02:06.459735  152809 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0908 14:02:06.647889  152809 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0908 14:02:06.648233  152809 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost scheduled-stop-160137] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0908 14:02:06.896236  152809 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0908 14:02:06.896833  152809 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost scheduled-stop-160137] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0908 14:02:07.559094  152809 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0908 14:02:07.909212  152809 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0908 14:02:08.439521  152809 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0908 14:02:08.439746  152809 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0908 14:02:08.977135  152809 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0908 14:02:09.688590  152809 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0908 14:02:09.858486  152809 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0908 14:02:10.000264  152809 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0908 14:02:10.484386  152809 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0908 14:02:10.485253  152809 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0908 14:02:10.487959  152809 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0908 14:02:10.493674  152809 out.go:252]   - Booting up control plane ...
	I0908 14:02:10.493810  152809 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0908 14:02:10.493913  152809 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0908 14:02:10.493991  152809 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0908 14:02:10.504088  152809 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0908 14:02:10.504350  152809 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0908 14:02:10.511553  152809 kubeadm.go:310] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0908 14:02:10.512691  152809 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0908 14:02:10.512738  152809 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0908 14:02:10.614363  152809 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0908 14:02:10.614477  152809 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0908 14:02:11.625423  152809 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.011172178s
	I0908 14:02:11.629153  152809 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0908 14:02:11.629356  152809 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I0908 14:02:11.629558  152809 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0908 14:02:11.629639  152809 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0908 14:02:14.469282  152809 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 2.839568769s
	I0908 14:02:15.730613  152809 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 4.101379283s
	I0908 14:02:17.630750  152809 kubeadm.go:310] [control-plane-check] kube-apiserver is healthy after 6.00148528s
	I0908 14:02:17.652619  152809 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0908 14:02:17.677324  152809 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0908 14:02:17.697028  152809 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0908 14:02:17.697227  152809 kubeadm.go:310] [mark-control-plane] Marking the node scheduled-stop-160137 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0908 14:02:17.709493  152809 kubeadm.go:310] [bootstrap-token] Using token: 7ew1dq.j6g33mwbz8hy5iji
	I0908 14:02:17.712380  152809 out.go:252]   - Configuring RBAC rules ...
	I0908 14:02:17.712501  152809 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0908 14:02:17.720563  152809 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0908 14:02:17.734245  152809 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0908 14:02:17.741701  152809 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0908 14:02:17.747168  152809 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0908 14:02:17.754929  152809 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0908 14:02:18.038332  152809 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0908 14:02:18.475839  152809 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0908 14:02:19.038533  152809 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0908 14:02:19.039499  152809 kubeadm.go:310] 
	I0908 14:02:19.039566  152809 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0908 14:02:19.039570  152809 kubeadm.go:310] 
	I0908 14:02:19.039647  152809 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0908 14:02:19.039651  152809 kubeadm.go:310] 
	I0908 14:02:19.039677  152809 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0908 14:02:19.039736  152809 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0908 14:02:19.039825  152809 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0908 14:02:19.039835  152809 kubeadm.go:310] 
	I0908 14:02:19.039890  152809 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0908 14:02:19.039893  152809 kubeadm.go:310] 
	I0908 14:02:19.039946  152809 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0908 14:02:19.039958  152809 kubeadm.go:310] 
	I0908 14:02:19.040011  152809 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0908 14:02:19.040091  152809 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0908 14:02:19.040170  152809 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0908 14:02:19.040174  152809 kubeadm.go:310] 
	I0908 14:02:19.040267  152809 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0908 14:02:19.040346  152809 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0908 14:02:19.040349  152809 kubeadm.go:310] 
	I0908 14:02:19.040462  152809 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 7ew1dq.j6g33mwbz8hy5iji \
	I0908 14:02:19.040566  152809 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:95656961684b4abed2a441b60d9e5046bee63a59d43389631e28f6a3337554cd \
	I0908 14:02:19.040587  152809 kubeadm.go:310] 	--control-plane 
	I0908 14:02:19.040590  152809 kubeadm.go:310] 
	I0908 14:02:19.040675  152809 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0908 14:02:19.040679  152809 kubeadm.go:310] 
	I0908 14:02:19.040760  152809 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 7ew1dq.j6g33mwbz8hy5iji \
	I0908 14:02:19.040863  152809 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:95656961684b4abed2a441b60d9e5046bee63a59d43389631e28f6a3337554cd 
	I0908 14:02:19.044704  152809 kubeadm.go:310] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I0908 14:02:19.044925  152809 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I0908 14:02:19.045044  152809 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0908 14:02:19.045060  152809 cni.go:84] Creating CNI manager for ""
	I0908 14:02:19.045066  152809 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0908 14:02:19.048112  152809 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I0908 14:02:19.050916  152809 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0908 14:02:19.054547  152809 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.0/kubectl ...
	I0908 14:02:19.054556  152809 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0908 14:02:19.074663  152809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0908 14:02:19.363729  152809 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0908 14:02:19.363894  152809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 14:02:19.363978  152809 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes scheduled-stop-160137 minikube.k8s.io/updated_at=2025_09_08T14_02_19_0700 minikube.k8s.io/version=v1.36.0 minikube.k8s.io/commit=4237956cfce90d4ab760d817400bd4c89cad50d6 minikube.k8s.io/name=scheduled-stop-160137 minikube.k8s.io/primary=true
	I0908 14:02:19.493330  152809 ops.go:34] apiserver oom_adj: -16
	I0908 14:02:19.493349  152809 kubeadm.go:1105] duration metric: took 129.539712ms to wait for elevateKubeSystemPrivileges
	I0908 14:02:19.493360  152809 kubeadm.go:394] duration metric: took 15.909730621s to StartCluster
	I0908 14:02:19.493374  152809 settings.go:142] acquiring lock: {Name:mk4f8717708db28eef58408fb347a7d2170243fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 14:02:19.493437  152809 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21504-2314/kubeconfig
	I0908 14:02:19.494084  152809 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21504-2314/kubeconfig: {Name:mk59ae76c24dca3eb03e6fa665ed1169acb8310d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 14:02:19.494285  152809 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0908 14:02:19.494398  152809 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0908 14:02:19.494624  152809 config.go:182] Loaded profile config "scheduled-stop-160137": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0908 14:02:19.494690  152809 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0908 14:02:19.494750  152809 addons.go:69] Setting storage-provisioner=true in profile "scheduled-stop-160137"
	I0908 14:02:19.494763  152809 addons.go:238] Setting addon storage-provisioner=true in "scheduled-stop-160137"
	I0908 14:02:19.494783  152809 host.go:66] Checking if "scheduled-stop-160137" exists ...
	I0908 14:02:19.495279  152809 cli_runner.go:164] Run: docker container inspect scheduled-stop-160137 --format={{.State.Status}}
	I0908 14:02:19.495691  152809 addons.go:69] Setting default-storageclass=true in profile "scheduled-stop-160137"
	I0908 14:02:19.495708  152809 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "scheduled-stop-160137"
	I0908 14:02:19.495997  152809 cli_runner.go:164] Run: docker container inspect scheduled-stop-160137 --format={{.State.Status}}
	I0908 14:02:19.498471  152809 out.go:179] * Verifying Kubernetes components...
	I0908 14:02:19.501470  152809 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 14:02:19.537028  152809 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0908 14:02:19.539167  152809 addons.go:238] Setting addon default-storageclass=true in "scheduled-stop-160137"
	I0908 14:02:19.539194  152809 host.go:66] Checking if "scheduled-stop-160137" exists ...
	I0908 14:02:19.539606  152809 cli_runner.go:164] Run: docker container inspect scheduled-stop-160137 --format={{.State.Status}}
	I0908 14:02:19.541693  152809 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0908 14:02:19.541704  152809 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0908 14:02:19.541761  152809 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-160137
	I0908 14:02:19.584513  152809 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0908 14:02:19.584525  152809 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0908 14:02:19.584616  152809 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-160137
	I0908 14:02:19.592348  152809 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32968 SSHKeyPath:/home/jenkins/minikube-integration/21504-2314/.minikube/machines/scheduled-stop-160137/id_rsa Username:docker}
	I0908 14:02:19.621704  152809 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32968 SSHKeyPath:/home/jenkins/minikube-integration/21504-2314/.minikube/machines/scheduled-stop-160137/id_rsa Username:docker}
	I0908 14:02:19.801557  152809 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
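Editor's note: the sed pipeline above splices a `hosts` block (resolving host.minikube.internal to the host gateway) into CoreDNS's Corefile just before the `forward` line, inserts a `log` directive before `errors`, and then `kubectl replace`s the ConfigMap. An illustrative excerpt of the resulting Corefile (other plugins omitted):

    .:53 {
        log
        errors
        hosts {
           192.168.76.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf
    }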
	I0908 14:02:19.801663  152809 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0908 14:02:19.805336  152809 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0908 14:02:19.827307  152809 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0908 14:02:20.281568  152809 api_server.go:52] waiting for apiserver process to appear ...
	I0908 14:02:20.281637  152809 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0908 14:02:20.281722  152809 start.go:976] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I0908 14:02:20.393401  152809 api_server.go:72] duration metric: took 899.090682ms to wait for apiserver process to appear ...
	I0908 14:02:20.393412  152809 api_server.go:88] waiting for apiserver healthz status ...
	I0908 14:02:20.393429  152809 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0908 14:02:20.396220  152809 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I0908 14:02:20.399909  152809 addons.go:514] duration metric: took 905.224599ms for enable addons: enabled=[default-storageclass storage-provisioner]
	I0908 14:02:20.404998  152809 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0908 14:02:20.406071  152809 api_server.go:141] control plane version: v1.34.0
	I0908 14:02:20.406093  152809 api_server.go:131] duration metric: took 12.675083ms to wait for apiserver health ...
	I0908 14:02:20.406101  152809 system_pods.go:43] waiting for kube-system pods to appear ...
	I0908 14:02:20.408914  152809 system_pods.go:59] 5 kube-system pods found
	I0908 14:02:20.408938  152809 system_pods.go:61] "etcd-scheduled-stop-160137" [d0a800b1-1612-4cf1-81dd-6971c9de4c21] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0908 14:02:20.408945  152809 system_pods.go:61] "kube-apiserver-scheduled-stop-160137" [96835325-3a90-4104-a50d-68d34a53e068] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0908 14:02:20.408956  152809 system_pods.go:61] "kube-controller-manager-scheduled-stop-160137" [38d52c79-bf9e-4186-a82f-228c055b5847] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0908 14:02:20.408963  152809 system_pods.go:61] "kube-scheduler-scheduled-stop-160137" [fbcd5c37-9cb8-44ba-a744-96af01623051] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0908 14:02:20.408967  152809 system_pods.go:61] "storage-provisioner" [d8ec5534-5260-45ae-a980-e0c94bbb3244] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0908 14:02:20.408971  152809 system_pods.go:74] duration metric: took 2.866845ms to wait for pod list to return data ...
	I0908 14:02:20.408980  152809 kubeadm.go:578] duration metric: took 914.677633ms to wait for: map[apiserver:true system_pods:true]
	I0908 14:02:20.408993  152809 node_conditions.go:102] verifying NodePressure condition ...
	I0908 14:02:20.411537  152809 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0908 14:02:20.411554  152809 node_conditions.go:123] node cpu capacity is 2
	I0908 14:02:20.411565  152809 node_conditions.go:105] duration metric: took 2.568479ms to run NodePressure ...
	I0908 14:02:20.411577  152809 start.go:241] waiting for startup goroutines ...
	I0908 14:02:20.786416  152809 kapi.go:214] "coredns" deployment in "kube-system" namespace and "scheduled-stop-160137" context rescaled to 1 replicas
	I0908 14:02:20.786438  152809 start.go:246] waiting for cluster config update ...
	I0908 14:02:20.786447  152809 start.go:255] writing updated cluster config ...
	I0908 14:02:20.786752  152809 ssh_runner.go:195] Run: rm -f paused
	I0908 14:02:20.846924  152809 start.go:617] kubectl: 1.33.2, cluster: 1.34.0 (minor skew: 1)
	I0908 14:02:20.850235  152809 out.go:179] * Done! kubectl is now configured to use "scheduled-stop-160137" cluster and "default" namespace by default
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	0184e8be01d87       996be7e86d9b3       10 seconds ago      Running             kube-controller-manager   0                   1f6d67a0cb9e5       kube-controller-manager-scheduled-stop-160137
	04b7c58417b31       a1894772a478e       10 seconds ago      Running             etcd                      0                   b84f4cd2b52fa       etcd-scheduled-stop-160137
	c9982c44bd861       d291939e99406       10 seconds ago      Running             kube-apiserver            0                   37e87d323e020       kube-apiserver-scheduled-stop-160137
	0a8bfcfa70eed       a25f5ef9c34c3       10 seconds ago      Running             kube-scheduler            0                   0d49b4ce5b00c       kube-scheduler-scheduled-stop-160137
	
	
	==> containerd <==
	Sep 08 14:02:01 scheduled-stop-160137 containerd[837]: time="2025-09-08T14:02:01.075735295Z" level=info msg="containerd successfully booted in 0.088388s"
	Sep 08 14:02:12 scheduled-stop-160137 containerd[837]: time="2025-09-08T14:02:12.033058938Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-scheduled-stop-160137,Uid:ffbb50371569b5ad6ebfc4161ae551a5,Namespace:kube-system,Attempt:0,}"
	Sep 08 14:02:12 scheduled-stop-160137 containerd[837]: time="2025-09-08T14:02:12.039575792Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:etcd-scheduled-stop-160137,Uid:c7181bb829f59210ffc51b2fd67e0217,Namespace:kube-system,Attempt:0,}"
	Sep 08 14:02:12 scheduled-stop-160137 containerd[837]: time="2025-09-08T14:02:12.043664838Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-scheduled-stop-160137,Uid:c457d52c24dbccf78b0af8f0ab04285c,Namespace:kube-system,Attempt:0,}"
	Sep 08 14:02:12 scheduled-stop-160137 containerd[837]: time="2025-09-08T14:02:12.051832959Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-scheduled-stop-160137,Uid:e23c7c4cd1f10c12a726a5f3f5563375,Namespace:kube-system,Attempt:0,}"
	Sep 08 14:02:12 scheduled-stop-160137 containerd[837]: time="2025-09-08T14:02:12.150488260Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-scheduled-stop-160137,Uid:ffbb50371569b5ad6ebfc4161ae551a5,Namespace:kube-system,Attempt:0,} returns sandbox id \"0d49b4ce5b00c28e8becba27d856dc0dd8ef0265440a7e9d64d9b57a71bce775\""
	Sep 08 14:02:12 scheduled-stop-160137 containerd[837]: time="2025-09-08T14:02:12.159595190Z" level=info msg="CreateContainer within sandbox \"0d49b4ce5b00c28e8becba27d856dc0dd8ef0265440a7e9d64d9b57a71bce775\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
	Sep 08 14:02:12 scheduled-stop-160137 containerd[837]: time="2025-09-08T14:02:12.185289295Z" level=info msg="CreateContainer within sandbox \"0d49b4ce5b00c28e8becba27d856dc0dd8ef0265440a7e9d64d9b57a71bce775\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"0a8bfcfa70eed98313b87318bdfa580f89a9d4908479637f3827d093cf7a1133\""
	Sep 08 14:02:12 scheduled-stop-160137 containerd[837]: time="2025-09-08T14:02:12.186075421Z" level=info msg="StartContainer for \"0a8bfcfa70eed98313b87318bdfa580f89a9d4908479637f3827d093cf7a1133\""
	Sep 08 14:02:12 scheduled-stop-160137 containerd[837]: time="2025-09-08T14:02:12.272685676Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-scheduled-stop-160137,Uid:c457d52c24dbccf78b0af8f0ab04285c,Namespace:kube-system,Attempt:0,} returns sandbox id \"37e87d323e020414e1ef262b75ea29a0b15e5ee553405b8a03222719bceff241\""
	Sep 08 14:02:12 scheduled-stop-160137 containerd[837]: time="2025-09-08T14:02:12.291412665Z" level=info msg="CreateContainer within sandbox \"37e87d323e020414e1ef262b75ea29a0b15e5ee553405b8a03222719bceff241\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
	Sep 08 14:02:12 scheduled-stop-160137 containerd[837]: time="2025-09-08T14:02:12.323552444Z" level=info msg="StartContainer for \"0a8bfcfa70eed98313b87318bdfa580f89a9d4908479637f3827d093cf7a1133\" returns successfully"
	Sep 08 14:02:12 scheduled-stop-160137 containerd[837]: time="2025-09-08T14:02:12.326078946Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:etcd-scheduled-stop-160137,Uid:c7181bb829f59210ffc51b2fd67e0217,Namespace:kube-system,Attempt:0,} returns sandbox id \"b84f4cd2b52fae060d4384fa3464138b6bb950027315245b72f76ce402e95f27\""
	Sep 08 14:02:12 scheduled-stop-160137 containerd[837]: time="2025-09-08T14:02:12.326778719Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-scheduled-stop-160137,Uid:e23c7c4cd1f10c12a726a5f3f5563375,Namespace:kube-system,Attempt:0,} returns sandbox id \"1f6d67a0cb9e5a777b653db7280331390659ed4c26433a08f26f19849d916c4e\""
	Sep 08 14:02:12 scheduled-stop-160137 containerd[837]: time="2025-09-08T14:02:12.334547879Z" level=info msg="CreateContainer within sandbox \"b84f4cd2b52fae060d4384fa3464138b6bb950027315245b72f76ce402e95f27\" for container &ContainerMetadata{Name:etcd,Attempt:0,}"
	Sep 08 14:02:12 scheduled-stop-160137 containerd[837]: time="2025-09-08T14:02:12.337496571Z" level=info msg="CreateContainer within sandbox \"1f6d67a0cb9e5a777b653db7280331390659ed4c26433a08f26f19849d916c4e\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
	Sep 08 14:02:12 scheduled-stop-160137 containerd[837]: time="2025-09-08T14:02:12.346473547Z" level=info msg="CreateContainer within sandbox \"37e87d323e020414e1ef262b75ea29a0b15e5ee553405b8a03222719bceff241\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"c9982c44bd861361732aa2f07498f8ab2a24919eabe2bda48a6e2eb4f786ce56\""
	Sep 08 14:02:12 scheduled-stop-160137 containerd[837]: time="2025-09-08T14:02:12.347333995Z" level=info msg="StartContainer for \"c9982c44bd861361732aa2f07498f8ab2a24919eabe2bda48a6e2eb4f786ce56\""
	Sep 08 14:02:12 scheduled-stop-160137 containerd[837]: time="2025-09-08T14:02:12.394748698Z" level=info msg="CreateContainer within sandbox \"b84f4cd2b52fae060d4384fa3464138b6bb950027315245b72f76ce402e95f27\" for &ContainerMetadata{Name:etcd,Attempt:0,} returns container id \"04b7c58417b31f7905ee0228718c91461c6989db69e2870831f4c3788b704541\""
	Sep 08 14:02:12 scheduled-stop-160137 containerd[837]: time="2025-09-08T14:02:12.395530507Z" level=info msg="StartContainer for \"04b7c58417b31f7905ee0228718c91461c6989db69e2870831f4c3788b704541\""
	Sep 08 14:02:12 scheduled-stop-160137 containerd[837]: time="2025-09-08T14:02:12.412649413Z" level=info msg="CreateContainer within sandbox \"1f6d67a0cb9e5a777b653db7280331390659ed4c26433a08f26f19849d916c4e\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"0184e8be01d87b1545a60040e0a2b8d62e841025be306a2f411af3c0562e4b7a\""
	Sep 08 14:02:12 scheduled-stop-160137 containerd[837]: time="2025-09-08T14:02:12.413340711Z" level=info msg="StartContainer for \"0184e8be01d87b1545a60040e0a2b8d62e841025be306a2f411af3c0562e4b7a\""
	Sep 08 14:02:12 scheduled-stop-160137 containerd[837]: time="2025-09-08T14:02:12.474282486Z" level=info msg="StartContainer for \"c9982c44bd861361732aa2f07498f8ab2a24919eabe2bda48a6e2eb4f786ce56\" returns successfully"
	Sep 08 14:02:12 scheduled-stop-160137 containerd[837]: time="2025-09-08T14:02:12.557963421Z" level=info msg="StartContainer for \"04b7c58417b31f7905ee0228718c91461c6989db69e2870831f4c3788b704541\" returns successfully"
	Sep 08 14:02:12 scheduled-stop-160137 containerd[837]: time="2025-09-08T14:02:12.630706817Z" level=info msg="StartContainer for \"0184e8be01d87b1545a60040e0a2b8d62e841025be306a2f411af3c0562e4b7a\" returns successfully"
	
	
	==> describe nodes <==
	Name:               scheduled-stop-160137
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=scheduled-stop-160137
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4237956cfce90d4ab760d817400bd4c89cad50d6
	                    minikube.k8s.io/name=scheduled-stop-160137
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_08T14_02_19_0700
	                    minikube.k8s.io/version=v1.36.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Sep 2025 14:02:15 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  scheduled-stop-160137
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Sep 2025 14:02:18 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Sep 2025 14:02:18 +0000   Mon, 08 Sep 2025 14:02:13 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Sep 2025 14:02:18 +0000   Mon, 08 Sep 2025 14:02:13 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Sep 2025 14:02:18 +0000   Mon, 08 Sep 2025 14:02:13 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Sep 2025 14:02:18 +0000   Mon, 08 Sep 2025 14:02:15 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    scheduled-stop-160137
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 0f20ba50c6bf4a5b8283cbb26fa967ed
	  System UUID:                82836647-7b69-47b6-bf86-0a58c58dc3ed
	  Boot ID:                    e9996d3c-7ca0-44f4-a0bc-36bb577e6736
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.7.27
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	Non-terminated Pods:          (5 in total)
	  Namespace                   Name                                             CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                             ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-scheduled-stop-160137                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         4s
	  kube-system                 kube-apiserver-scheduled-stop-160137             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5s
	  kube-system                 kube-controller-manager-scheduled-stop-160137    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4s
	  kube-system                 kube-scheduler-scheduled-stop-160137             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4s
	  kube-system                 storage-provisioner                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                650m (32%)  0 (0%)
	  memory             100Mi (1%)  0 (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 4s    kubelet          Starting kubelet.
	  Warning  CgroupV1                 4s    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeAllocatableEnforced  4s    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  4s    kubelet          Node scheduled-stop-160137 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    4s    kubelet          Node scheduled-stop-160137 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     4s    kubelet          Node scheduled-stop-160137 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           0s    node-controller  Node scheduled-stop-160137 event: Registered Node scheduled-stop-160137 in Controller
	
	
	==> dmesg <==
	[Sep 8 13:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014416] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.488283] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.036945] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.751194] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.289622] kauditd_printk_skb: 36 callbacks suppressed
	[Sep 8 13:57] hrtimer: interrupt took 7896607 ns
	
	
	==> etcd [04b7c58417b31f7905ee0228718c91461c6989db69e2870831f4c3788b704541] <==
	{"level":"warn","ts":"2025-09-08T14:02:14.380445Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54698","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T14:02:14.420447Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54714","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T14:02:14.456013Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54720","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T14:02:14.472432Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54744","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T14:02:14.489887Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54756","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T14:02:14.505752Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54774","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T14:02:14.523176Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54788","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T14:02:14.541706Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54810","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T14:02:14.559709Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54826","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T14:02:14.577361Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54840","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T14:02:14.595427Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54864","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T14:02:14.623349Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54888","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T14:02:14.637932Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54896","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T14:02:14.655829Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54916","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T14:02:14.676531Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54944","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T14:02:14.693175Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54956","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T14:02:14.712922Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54976","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T14:02:14.725015Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54982","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T14:02:14.741134Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55002","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T14:02:14.752728Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55026","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T14:02:14.774022Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55044","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T14:02:14.795903Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55058","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T14:02:14.812859Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55080","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T14:02:14.829346Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55096","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T14:02:14.908455Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55112","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 14:02:22 up 44 min,  0 users,  load average: 1.84, 1.92, 2.06
	Linux scheduled-stop-160137 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kube-apiserver [c9982c44bd861361732aa2f07498f8ab2a24919eabe2bda48a6e2eb4f786ce56] <==
	I0908 14:02:15.745622       1 cache.go:39] Caches are synced for autoregister controller
	I0908 14:02:15.751767       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I0908 14:02:15.756895       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I0908 14:02:15.760822       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0908 14:02:15.760881       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0908 14:02:15.761125       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I0908 14:02:15.761332       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0908 14:02:15.761349       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0908 14:02:15.761862       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0908 14:02:15.776663       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0908 14:02:15.792550       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0908 14:02:15.795361       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I0908 14:02:16.438520       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0908 14:02:16.443635       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0908 14:02:16.443658       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0908 14:02:17.252056       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0908 14:02:17.310479       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0908 14:02:17.447930       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0908 14:02:17.460693       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I0908 14:02:17.461938       1 controller.go:667] quota admission added evaluator for: endpoints
	I0908 14:02:17.470565       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0908 14:02:17.623234       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I0908 14:02:18.449816       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I0908 14:02:18.473454       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0908 14:02:18.486933       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	
	
	==> kube-controller-manager [0184e8be01d87b1545a60040e0a2b8d62e841025be306a2f411af3c0562e4b7a] <==
	I0908 14:02:22.644536       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I0908 14:02:22.660845       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I0908 14:02:22.666102       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I0908 14:02:22.669656       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I0908 14:02:22.669861       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I0908 14:02:22.669882       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I0908 14:02:22.669909       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I0908 14:02:22.669926       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I0908 14:02:22.669962       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I0908 14:02:22.669980       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I0908 14:02:22.669998       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I0908 14:02:22.671006       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0908 14:02:22.671035       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I0908 14:02:22.671159       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I0908 14:02:22.671610       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="scheduled-stop-160137" podCIDRs=["10.244.0.0/24"]
	I0908 14:02:22.673300       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I0908 14:02:22.679579       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I0908 14:02:22.679726       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0908 14:02:22.685521       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I0908 14:02:22.695936       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I0908 14:02:22.697133       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I0908 14:02:22.698380       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I0908 14:02:22.715950       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0908 14:02:22.715978       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0908 14:02:22.715985       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-scheduler [0a8bfcfa70eed98313b87318bdfa580f89a9d4908479637f3827d093cf7a1133] <==
	E0908 14:02:15.736584       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E0908 14:02:15.736758       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E0908 14:02:15.736949       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0908 14:02:15.737015       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0908 14:02:15.737076       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0908 14:02:15.737138       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0908 14:02:15.737195       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0908 14:02:15.737244       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E0908 14:02:15.737313       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0908 14:02:15.737369       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0908 14:02:15.740278       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0908 14:02:15.740431       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0908 14:02:15.740498       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0908 14:02:15.740546       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0908 14:02:15.740591       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0908 14:02:16.646305       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0908 14:02:16.755286       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0908 14:02:16.764003       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0908 14:02:16.789242       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0908 14:02:16.808941       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E0908 14:02:16.853329       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0908 14:02:16.858543       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E0908 14:02:16.904573       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0908 14:02:16.999892       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I0908 14:02:18.618667       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Sep 08 14:02:18 scheduled-stop-160137 kubelet[1549]: I0908 14:02:18.797757    1549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ffbb50371569b5ad6ebfc4161ae551a5-kubeconfig\") pod \"kube-scheduler-scheduled-stop-160137\" (UID: \"ffbb50371569b5ad6ebfc4161ae551a5\") " pod="kube-system/kube-scheduler-scheduled-stop-160137"
	Sep 08 14:02:18 scheduled-stop-160137 kubelet[1549]: I0908 14:02:18.797781    1549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/c7181bb829f59210ffc51b2fd67e0217-etcd-certs\") pod \"etcd-scheduled-stop-160137\" (UID: \"c7181bb829f59210ffc51b2fd67e0217\") " pod="kube-system/etcd-scheduled-stop-160137"
	Sep 08 14:02:18 scheduled-stop-160137 kubelet[1549]: I0908 14:02:18.797801    1549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c457d52c24dbccf78b0af8f0ab04285c-ca-certs\") pod \"kube-apiserver-scheduled-stop-160137\" (UID: \"c457d52c24dbccf78b0af8f0ab04285c\") " pod="kube-system/kube-apiserver-scheduled-stop-160137"
	Sep 08 14:02:18 scheduled-stop-160137 kubelet[1549]: I0908 14:02:18.797819    1549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e23c7c4cd1f10c12a726a5f3f5563375-flexvolume-dir\") pod \"kube-controller-manager-scheduled-stop-160137\" (UID: \"e23c7c4cd1f10c12a726a5f3f5563375\") " pod="kube-system/kube-controller-manager-scheduled-stop-160137"
	Sep 08 14:02:18 scheduled-stop-160137 kubelet[1549]: I0908 14:02:18.797839    1549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/c7181bb829f59210ffc51b2fd67e0217-etcd-data\") pod \"etcd-scheduled-stop-160137\" (UID: \"c7181bb829f59210ffc51b2fd67e0217\") " pod="kube-system/etcd-scheduled-stop-160137"
	Sep 08 14:02:18 scheduled-stop-160137 kubelet[1549]: I0908 14:02:18.797860    1549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c457d52c24dbccf78b0af8f0ab04285c-k8s-certs\") pod \"kube-apiserver-scheduled-stop-160137\" (UID: \"c457d52c24dbccf78b0af8f0ab04285c\") " pod="kube-system/kube-apiserver-scheduled-stop-160137"
	Sep 08 14:02:18 scheduled-stop-160137 kubelet[1549]: I0908 14:02:18.797876    1549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e23c7c4cd1f10c12a726a5f3f5563375-ca-certs\") pod \"kube-controller-manager-scheduled-stop-160137\" (UID: \"e23c7c4cd1f10c12a726a5f3f5563375\") " pod="kube-system/kube-controller-manager-scheduled-stop-160137"
	Sep 08 14:02:18 scheduled-stop-160137 kubelet[1549]: I0908 14:02:18.797895    1549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e23c7c4cd1f10c12a726a5f3f5563375-usr-share-ca-certificates\") pod \"kube-controller-manager-scheduled-stop-160137\" (UID: \"e23c7c4cd1f10c12a726a5f3f5563375\") " pod="kube-system/kube-controller-manager-scheduled-stop-160137"
	Sep 08 14:02:18 scheduled-stop-160137 kubelet[1549]: I0908 14:02:18.797913    1549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c457d52c24dbccf78b0af8f0ab04285c-etc-ca-certificates\") pod \"kube-apiserver-scheduled-stop-160137\" (UID: \"c457d52c24dbccf78b0af8f0ab04285c\") " pod="kube-system/kube-apiserver-scheduled-stop-160137"
	Sep 08 14:02:18 scheduled-stop-160137 kubelet[1549]: I0908 14:02:18.797936    1549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c457d52c24dbccf78b0af8f0ab04285c-usr-local-share-ca-certificates\") pod \"kube-apiserver-scheduled-stop-160137\" (UID: \"c457d52c24dbccf78b0af8f0ab04285c\") " pod="kube-system/kube-apiserver-scheduled-stop-160137"
	Sep 08 14:02:18 scheduled-stop-160137 kubelet[1549]: I0908 14:02:18.797952    1549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e23c7c4cd1f10c12a726a5f3f5563375-k8s-certs\") pod \"kube-controller-manager-scheduled-stop-160137\" (UID: \"e23c7c4cd1f10c12a726a5f3f5563375\") " pod="kube-system/kube-controller-manager-scheduled-stop-160137"
	Sep 08 14:02:18 scheduled-stop-160137 kubelet[1549]: I0908 14:02:18.797973    1549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e23c7c4cd1f10c12a726a5f3f5563375-kubeconfig\") pod \"kube-controller-manager-scheduled-stop-160137\" (UID: \"e23c7c4cd1f10c12a726a5f3f5563375\") " pod="kube-system/kube-controller-manager-scheduled-stop-160137"
	Sep 08 14:02:19 scheduled-stop-160137 kubelet[1549]: I0908 14:02:19.341939    1549 apiserver.go:52] "Watching apiserver"
	Sep 08 14:02:19 scheduled-stop-160137 kubelet[1549]: I0908 14:02:19.395395    1549 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Sep 08 14:02:19 scheduled-stop-160137 kubelet[1549]: I0908 14:02:19.514230    1549 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-scheduled-stop-160137"
	Sep 08 14:02:19 scheduled-stop-160137 kubelet[1549]: E0908 14:02:19.531723    1549 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-scheduled-stop-160137\" already exists" pod="kube-system/kube-apiserver-scheduled-stop-160137"
	Sep 08 14:02:19 scheduled-stop-160137 kubelet[1549]: I0908 14:02:19.609320    1549 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-scheduled-stop-160137" podStartSLOduration=1.60929785 podStartE2EDuration="1.60929785s" podCreationTimestamp="2025-09-08 14:02:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-08 14:02:19.553276844 +0000 UTC m=+1.300418480" watchObservedRunningTime="2025-09-08 14:02:19.60929785 +0000 UTC m=+1.356439486"
	Sep 08 14:02:19 scheduled-stop-160137 kubelet[1549]: I0908 14:02:19.645284    1549 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-scheduled-stop-160137" podStartSLOduration=1.6452655919999999 podStartE2EDuration="1.645265592s" podCreationTimestamp="2025-09-08 14:02:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-08 14:02:19.645060971 +0000 UTC m=+1.392202606" watchObservedRunningTime="2025-09-08 14:02:19.645265592 +0000 UTC m=+1.392407228"
	Sep 08 14:02:19 scheduled-stop-160137 kubelet[1549]: I0908 14:02:19.645464    1549 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-scheduled-stop-160137" podStartSLOduration=2.645458981 podStartE2EDuration="2.645458981s" podCreationTimestamp="2025-09-08 14:02:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-08 14:02:19.609643561 +0000 UTC m=+1.356785205" watchObservedRunningTime="2025-09-08 14:02:19.645458981 +0000 UTC m=+1.392600617"
	Sep 08 14:02:19 scheduled-stop-160137 kubelet[1549]: I0908 14:02:19.690632    1549 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-scheduled-stop-160137" podStartSLOduration=1.6906128169999999 podStartE2EDuration="1.690612817s" podCreationTimestamp="2025-09-08 14:02:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-08 14:02:19.667190015 +0000 UTC m=+1.414331716" watchObservedRunningTime="2025-09-08 14:02:19.690612817 +0000 UTC m=+1.437754461"
	Sep 08 14:02:22 scheduled-stop-160137 kubelet[1549]: I0908 14:02:22.750924    1549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/d8ec5534-5260-45ae-a980-e0c94bbb3244-tmp\") pod \"storage-provisioner\" (UID: \"d8ec5534-5260-45ae-a980-e0c94bbb3244\") " pod="kube-system/storage-provisioner"
	Sep 08 14:02:22 scheduled-stop-160137 kubelet[1549]: I0908 14:02:22.750993    1549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kgzfq\" (UniqueName: \"kubernetes.io/projected/d8ec5534-5260-45ae-a980-e0c94bbb3244-kube-api-access-kgzfq\") pod \"storage-provisioner\" (UID: \"d8ec5534-5260-45ae-a980-e0c94bbb3244\") " pod="kube-system/storage-provisioner"
	Sep 08 14:02:22 scheduled-stop-160137 kubelet[1549]: E0908 14:02:22.864447    1549 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Sep 08 14:02:22 scheduled-stop-160137 kubelet[1549]: E0908 14:02:22.864490    1549 projected.go:196] Error preparing data for projected volume kube-api-access-kgzfq for pod kube-system/storage-provisioner: configmap "kube-root-ca.crt" not found
	Sep 08 14:02:22 scheduled-stop-160137 kubelet[1549]: E0908 14:02:22.867752    1549 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d8ec5534-5260-45ae-a980-e0c94bbb3244-kube-api-access-kgzfq podName:d8ec5534-5260-45ae-a980-e0c94bbb3244 nodeName:}" failed. No retries permitted until 2025-09-08 14:02:23.36740589 +0000 UTC m=+5.114547526 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-kgzfq" (UniqueName: "kubernetes.io/projected/d8ec5534-5260-45ae-a980-e0c94bbb3244-kube-api-access-kgzfq") pod "storage-provisioner" (UID: "d8ec5534-5260-45ae-a980-e0c94bbb3244") : configmap "kube-root-ca.crt" not found
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p scheduled-stop-160137 -n scheduled-stop-160137
helpers_test.go:269: (dbg) Run:  kubectl --context scheduled-stop-160137 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: storage-provisioner
helpers_test.go:282: ======> post-mortem[TestScheduledStopUnix]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context scheduled-stop-160137 describe pod storage-provisioner
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context scheduled-stop-160137 describe pod storage-provisioner: exit status 1 (132.241572ms)

** stderr ** 
	Error from server (NotFound): pods "storage-provisioner" not found

** /stderr **
helpers_test.go:287: kubectl --context scheduled-stop-160137 describe pod storage-provisioner: exit status 1
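Aside: the non-running-pod check the harness ran above is plain kubectl; a generic form of the same query (a sketch, reusing this run's context name, though any context works):

	kubectl --context scheduled-stop-160137 get pods -A \
	  --field-selector=status.phase!=Running \
	  -o jsonpath='{.items[*].metadata.name}'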
helpers_test.go:175: Cleaning up "scheduled-stop-160137" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-160137
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-160137: (2.009927527s)
--- FAIL: TestScheduledStopUnix (33.66s)
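Aside: the scheduled-stop flow this test exercises can also be driven by hand. A minimal sketch, assuming a built minikube binary on PATH; the profile name sched-demo is arbitrary, and all flags shown are standard minikube flags:

	# start a throwaway cluster
	minikube start -p sched-demo --driver=docker --container-runtime=containerd
	# schedule a stop 5 minutes out, then reschedule to 15 seconds;
	# rescheduling replaces the earlier scheduled-stop process
	minikube stop -p sched-demo --schedule 5m
	minikube stop -p sched-demo --schedule 15s
	# TimeToStop reports the pending schedule
	minikube status -p sched-demo --format='{{.TimeToStop}}'
	# clean up
	minikube delete -p sched-demo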


Test pass (301/332)

Order passed test Duration (s)
3 TestDownloadOnly/v1.28.0/json-events 5.97
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.08
9 TestDownloadOnly/v1.28.0/DeleteAll 0.21
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.34.0/json-events 5.05
13 TestDownloadOnly/v1.34.0/preload-exists 0
17 TestDownloadOnly/v1.34.0/LogsDuration 0.23
18 TestDownloadOnly/v1.34.0/DeleteAll 0.33
19 TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds 0.21
21 TestBinaryMirror 0.61
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.08
27 TestAddons/Setup 210.2
29 TestAddons/serial/Volcano 40.29
31 TestAddons/serial/GCPAuth/Namespaces 0.18
32 TestAddons/serial/GCPAuth/FakeCredentials 10.08
35 TestAddons/parallel/Registry 15.26
36 TestAddons/parallel/RegistryCreds 0.73
37 TestAddons/parallel/Ingress 21.06
38 TestAddons/parallel/InspektorGadget 6.33
39 TestAddons/parallel/MetricsServer 5.88
41 TestAddons/parallel/CSI 48.98
42 TestAddons/parallel/Headlamp 24.42
43 TestAddons/parallel/CloudSpanner 6.8
44 TestAddons/parallel/LocalPath 52.53
45 TestAddons/parallel/NvidiaDevicePlugin 5.66
46 TestAddons/parallel/Yakd 11.86
48 TestAddons/StoppedEnableDisable 12.26
49 TestCertOptions 38.87
50 TestCertExpiration 231.94
52 TestForceSystemdFlag 42.44
53 TestForceSystemdEnv 35
54 TestDockerEnvContainerd 47.72
59 TestErrorSpam/setup 31.73
60 TestErrorSpam/start 0.83
61 TestErrorSpam/status 1.2
62 TestErrorSpam/pause 1.75
63 TestErrorSpam/unpause 1.78
64 TestErrorSpam/stop 1.43
67 TestFunctional/serial/CopySyncFile 0
68 TestFunctional/serial/StartWithProxy 91.18
69 TestFunctional/serial/AuditLog 0
70 TestFunctional/serial/SoftStart 6.79
71 TestFunctional/serial/KubeContext 0.07
72 TestFunctional/serial/KubectlGetPods 0.09
75 TestFunctional/serial/CacheCmd/cache/add_remote 3.68
76 TestFunctional/serial/CacheCmd/cache/add_local 1.31
77 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
78 TestFunctional/serial/CacheCmd/cache/list 0.05
79 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.33
80 TestFunctional/serial/CacheCmd/cache/cache_reload 1.99
81 TestFunctional/serial/CacheCmd/cache/delete 0.12
82 TestFunctional/serial/MinikubeKubectlCmd 0.14
83 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.13
84 TestFunctional/serial/ExtraConfig 42.53
85 TestFunctional/serial/ComponentHealth 0.11
86 TestFunctional/serial/LogsCmd 1.68
87 TestFunctional/serial/LogsFileCmd 1.86
88 TestFunctional/serial/InvalidService 4.72
90 TestFunctional/parallel/ConfigCmd 0.51
91 TestFunctional/parallel/DashboardCmd 8.17
92 TestFunctional/parallel/DryRun 0.6
93 TestFunctional/parallel/InternationalLanguage 0.2
94 TestFunctional/parallel/StatusCmd 1.08
98 TestFunctional/parallel/ServiceCmdConnect 8.64
99 TestFunctional/parallel/AddonsCmd 0.15
100 TestFunctional/parallel/PersistentVolumeClaim 33.12
102 TestFunctional/parallel/SSHCmd 0.69
103 TestFunctional/parallel/CpCmd 2.35
105 TestFunctional/parallel/FileSync 0.43
106 TestFunctional/parallel/CertSync 2.43
110 TestFunctional/parallel/NodeLabels 0.11
112 TestFunctional/parallel/NonActiveRuntimeDisabled 0.69
114 TestFunctional/parallel/License 0.3
116 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.61
117 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
119 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.42
120 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.09
121 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
125 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
126 TestFunctional/parallel/ServiceCmd/DeployApp 6.21
127 TestFunctional/parallel/ServiceCmd/List 0.54
128 TestFunctional/parallel/ServiceCmd/JSONOutput 0.62
129 TestFunctional/parallel/ServiceCmd/HTTPS 0.44
130 TestFunctional/parallel/ServiceCmd/Format 0.44
131 TestFunctional/parallel/ServiceCmd/URL 0.36
132 TestFunctional/parallel/ProfileCmd/profile_not_create 0.43
133 TestFunctional/parallel/ProfileCmd/profile_list 0.41
134 TestFunctional/parallel/ProfileCmd/profile_json_output 0.43
135 TestFunctional/parallel/MountCmd/any-port 8.16
136 TestFunctional/parallel/MountCmd/specific-port 2.16
137 TestFunctional/parallel/MountCmd/VerifyCleanup 2.33
138 TestFunctional/parallel/Version/short 0.08
139 TestFunctional/parallel/Version/components 1.48
140 TestFunctional/parallel/ImageCommands/ImageListShort 0.26
141 TestFunctional/parallel/ImageCommands/ImageListTable 0.31
142 TestFunctional/parallel/ImageCommands/ImageListJson 0.32
143 TestFunctional/parallel/ImageCommands/ImageListYaml 0.33
144 TestFunctional/parallel/ImageCommands/ImageBuild 4.31
145 TestFunctional/parallel/ImageCommands/Setup 0.61
146 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.59
147 TestFunctional/parallel/UpdateContextCmd/no_changes 0.22
148 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.21
149 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.2
150 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.5
151 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.54
152 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.39
153 TestFunctional/parallel/ImageCommands/ImageRemove 0.51
154 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.73
155 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.41
156 TestFunctional/delete_echo-server_images 0.04
157 TestFunctional/delete_my-image_image 0.02
158 TestFunctional/delete_minikube_cached_images 0.02
163 TestMultiControlPlane/serial/StartCluster 130.61
164 TestMultiControlPlane/serial/DeployApp 41.56
165 TestMultiControlPlane/serial/PingHostFromPods 1.64
166 TestMultiControlPlane/serial/AddWorkerNode 17.75
167 TestMultiControlPlane/serial/NodeLabels 0.2
168 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.11
169 TestMultiControlPlane/serial/CopyFile 19.32
170 TestMultiControlPlane/serial/StopSecondaryNode 12.95
171 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.76
172 TestMultiControlPlane/serial/RestartSecondaryNode 11.96
173 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.72
174 TestMultiControlPlane/serial/RestartClusterKeepsNodes 106
175 TestMultiControlPlane/serial/DeleteSecondaryNode 10.29
176 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.73
177 TestMultiControlPlane/serial/StopCluster 36
178 TestMultiControlPlane/serial/RestartCluster 60.96
179 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.74
180 TestMultiControlPlane/serial/AddSecondaryNode 30.2
181 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.1
185 TestJSONOutput/start/Command 82.97
186 TestJSONOutput/start/Audit 0
188 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
191 TestJSONOutput/pause/Command 0.76
192 TestJSONOutput/pause/Audit 0
194 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
197 TestJSONOutput/unpause/Command 0.69
198 TestJSONOutput/unpause/Audit 0
200 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
203 TestJSONOutput/stop/Command 5.71
204 TestJSONOutput/stop/Audit 0
206 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
207 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
208 TestErrorJSONOutput 0.25
210 TestKicCustomNetwork/create_custom_network 44.41
211 TestKicCustomNetwork/use_default_bridge_network 34.42
212 TestKicExistingNetwork 37.68
213 TestKicCustomSubnet 34.23
214 TestKicStaticIP 36.1
215 TestMainNoArgs 0.05
216 TestMinikubeProfile 73.03
219 TestMountStart/serial/StartWithMountFirst 6.72
220 TestMountStart/serial/VerifyMountFirst 0.26
221 TestMountStart/serial/StartWithMountSecond 8.44
222 TestMountStart/serial/VerifyMountSecond 0.25
223 TestMountStart/serial/DeleteFirst 1.6
224 TestMountStart/serial/VerifyMountPostDelete 0.25
225 TestMountStart/serial/Stop 1.2
226 TestMountStart/serial/RestartStopped 7.7
227 TestMountStart/serial/VerifyMountPostStop 0.27
230 TestMultiNode/serial/FreshStart2Nodes 94.75
231 TestMultiNode/serial/DeployApp2Nodes 17.96
232 TestMultiNode/serial/PingHostFrom2Pods 0.96
233 TestMultiNode/serial/AddNode 13.78
234 TestMultiNode/serial/MultiNodeLabels 0.09
235 TestMultiNode/serial/ProfileList 0.81
236 TestMultiNode/serial/CopyFile 10.02
237 TestMultiNode/serial/StopNode 2.54
238 TestMultiNode/serial/StartAfterStop 7.73
239 TestMultiNode/serial/RestartKeepsNodes 79.83
240 TestMultiNode/serial/DeleteNode 5.48
241 TestMultiNode/serial/StopMultiNode 23.96
242 TestMultiNode/serial/RestartMultiNode 50.04
243 TestMultiNode/serial/ValidateNameConflict 33.7
248 TestPreload 146.58
253 TestInsufficientStorage 9.94
254 TestRunningBinaryUpgrade 67.58
256 TestKubernetesUpgrade 206.98
257 TestMissingContainerUpgrade 133.89
259 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
260 TestNoKubernetes/serial/StartWithK8s 39.31
261 TestNoKubernetes/serial/StartWithStopK8s 25.36
262 TestNoKubernetes/serial/Start 8.94
263 TestNoKubernetes/serial/VerifyK8sNotRunning 0.27
264 TestNoKubernetes/serial/ProfileList 0.66
265 TestNoKubernetes/serial/Stop 1.2
266 TestNoKubernetes/serial/StartNoArgs 6.02
267 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.27
268 TestStoppedBinaryUpgrade/Setup 1.73
269 TestStoppedBinaryUpgrade/Upgrade 74.73
270 TestStoppedBinaryUpgrade/MinikubeLogs 1.7
279 TestPause/serial/Start 91.45
287 TestNetworkPlugins/group/false 3.76
291 TestPause/serial/SecondStartNoReconfiguration 8.03
292 TestPause/serial/Pause 0.9
293 TestPause/serial/VerifyStatus 0.43
294 TestPause/serial/Unpause 0.84
295 TestPause/serial/PauseAgain 1.38
296 TestPause/serial/DeletePaused 3.03
297 TestPause/serial/VerifyDeletedResources 0.5
299 TestStartStop/group/old-k8s-version/serial/FirstStart 61.26
300 TestStartStop/group/old-k8s-version/serial/DeployApp 9.38
301 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.31
302 TestStartStop/group/old-k8s-version/serial/Stop 11.98
303 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.19
304 TestStartStop/group/old-k8s-version/serial/SecondStart 49.31
305 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6
306 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 6.11
307 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.26
308 TestStartStop/group/old-k8s-version/serial/Pause 3.12
310 TestStartStop/group/no-preload/serial/FirstStart 80.08
312 TestStartStop/group/embed-certs/serial/FirstStart 103.29
313 TestStartStop/group/no-preload/serial/DeployApp 9.37
314 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.31
315 TestStartStop/group/no-preload/serial/Stop 12.07
316 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.19
317 TestStartStop/group/no-preload/serial/SecondStart 54.35
318 TestStartStop/group/embed-certs/serial/DeployApp 10.42
319 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.16
320 TestStartStop/group/embed-certs/serial/Stop 12.1
321 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
322 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.19
323 TestStartStop/group/embed-certs/serial/SecondStart 54.56
324 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.16
325 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.29
326 TestStartStop/group/no-preload/serial/Pause 4.29
328 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 93.17
329 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6
330 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.1
331 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.49
332 TestStartStop/group/embed-certs/serial/Pause 3.16
334 TestStartStop/group/newest-cni/serial/FirstStart 39
335 TestStartStop/group/newest-cni/serial/DeployApp 0
336 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.29
337 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.44
338 TestStartStop/group/newest-cni/serial/Stop 1.24
339 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.21
340 TestStartStop/group/newest-cni/serial/SecondStart 18.53
341 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.68
342 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.38
343 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
344 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
345 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.27
346 TestStartStop/group/newest-cni/serial/Pause 3.69
347 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.25
348 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 29.54
349 TestNetworkPlugins/group/auto/Start 65.88
350 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 7
351 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.15
352 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.29
353 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.72
354 TestNetworkPlugins/group/kindnet/Start 90.49
355 TestNetworkPlugins/group/auto/KubeletFlags 0.37
356 TestNetworkPlugins/group/auto/NetCatPod 9.37
357 TestNetworkPlugins/group/auto/DNS 0.24
358 TestNetworkPlugins/group/auto/Localhost 0.2
359 TestNetworkPlugins/group/auto/HairPin 0.25
360 TestNetworkPlugins/group/flannel/Start 82.64
361 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
362 TestNetworkPlugins/group/kindnet/KubeletFlags 0.3
363 TestNetworkPlugins/group/kindnet/NetCatPod 9.28
364 TestNetworkPlugins/group/kindnet/DNS 0.25
365 TestNetworkPlugins/group/kindnet/Localhost 0.19
366 TestNetworkPlugins/group/kindnet/HairPin 0.16
367 TestNetworkPlugins/group/enable-default-cni/Start 86.58
368 TestNetworkPlugins/group/flannel/ControllerPod 6
369 TestNetworkPlugins/group/flannel/KubeletFlags 0.34
370 TestNetworkPlugins/group/flannel/NetCatPod 10.33
371 TestNetworkPlugins/group/flannel/DNS 0.25
372 TestNetworkPlugins/group/flannel/Localhost 0.23
373 TestNetworkPlugins/group/flannel/HairPin 0.26
374 TestNetworkPlugins/group/custom-flannel/Start 54.65
375 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.42
376 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.4
377 TestNetworkPlugins/group/enable-default-cni/DNS 0.18
378 TestNetworkPlugins/group/enable-default-cni/Localhost 0.15
379 TestNetworkPlugins/group/enable-default-cni/HairPin 0.2
380 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.44
381 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.38
382 TestNetworkPlugins/group/custom-flannel/DNS 0.29
383 TestNetworkPlugins/group/custom-flannel/Localhost 0.26
384 TestNetworkPlugins/group/custom-flannel/HairPin 0.18
385 TestNetworkPlugins/group/bridge/Start 75.94
386 TestNetworkPlugins/group/calico/Start 60.41
387 TestNetworkPlugins/group/bridge/KubeletFlags 0.42
388 TestNetworkPlugins/group/bridge/NetCatPod 10.41
389 TestNetworkPlugins/group/calico/ControllerPod 6.01
390 TestNetworkPlugins/group/bridge/DNS 0.23
391 TestNetworkPlugins/group/bridge/Localhost 0.16
392 TestNetworkPlugins/group/bridge/HairPin 0.17
393 TestNetworkPlugins/group/calico/KubeletFlags 0.31
394 TestNetworkPlugins/group/calico/NetCatPod 9.29
395 TestNetworkPlugins/group/calico/DNS 0.25
396 TestNetworkPlugins/group/calico/Localhost 0.26
397 TestNetworkPlugins/group/calico/HairPin 0.28

TestDownloadOnly/v1.28.0/json-events (5.97s)
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-737790 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-737790 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (5.970325194s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (5.97s)
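For reference, the stream this subtest exercises is the same one the TestJSONOutput cases in the table above assert on: with -o=json minikube prints one CloudEvents-style JSON object per line, and the step counters must be distinct and increasing. A minimal sketch for inspecting that stream, assuming the event type name io.k8s.sigs.minikube.step and the data.currentstep/data.totalsteps/data.message fields (none of which are shown verbatim in this report):

# print "current/total message" for each progress event (requires jq)
out/minikube-linux-arm64 start -o=json --download-only -p download-only-737790 --force \
  --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker \
| jq -r 'select(.type == "io.k8s.sigs.minikube.step")
         | (.data.currentstep|tostring) + "/" + (.data.totalsteps|tostring) + " " + .data.message'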

TestDownloadOnly/v1.28.0/preload-exists (0s)
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I0908 13:27:16.849821    4118 preload.go:131] Checking if preload exists for k8s version v1.28.0 and runtime containerd
I0908 13:27:16.849907    4118 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21504-2314/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)
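The assertion above is a plain existence check against the on-disk cache. The same thing can be verified by hand using the path printed in the log (on a default install the cache lives under ~/.minikube rather than a Jenkins workspace):

# list the cached preload tarballs the test looks for
ls -lh /home/jenkins/minikube-integration/21504-2314/.minikube/cache/preloaded-tarball/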

TestDownloadOnly/v1.28.0/LogsDuration (0.08s)
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-737790
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-737790: exit status 85 (82.855082ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                         ARGS                                                                                          │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-737790 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-737790 │ jenkins │ v1.36.0 │ 08 Sep 25 13:27 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/08 13:27:10
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0908 13:27:10.928029    4123 out.go:360] Setting OutFile to fd 1 ...
	I0908 13:27:10.928160    4123 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 13:27:10.928171    4123 out.go:374] Setting ErrFile to fd 2...
	I0908 13:27:10.928175    4123 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 13:27:10.928484    4123 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21504-2314/.minikube/bin
	W0908 13:27:10.928628    4123 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21504-2314/.minikube/config/config.json: open /home/jenkins/minikube-integration/21504-2314/.minikube/config/config.json: no such file or directory
	I0908 13:27:10.929037    4123 out.go:368] Setting JSON to true
	I0908 13:27:10.929805    4123 start.go:130] hostinfo: {"hostname":"ip-172-31-30-239","uptime":581,"bootTime":1757337450,"procs":150,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0908 13:27:10.929872    4123 start.go:140] virtualization:  
	I0908 13:27:10.933887    4123 out.go:99] [download-only-737790] minikube v1.36.0 on Ubuntu 20.04 (arm64)
	W0908 13:27:10.934063    4123 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/21504-2314/.minikube/cache/preloaded-tarball: no such file or directory
	I0908 13:27:10.934136    4123 notify.go:220] Checking for updates...
	I0908 13:27:10.937134    4123 out.go:171] MINIKUBE_LOCATION=21504
	I0908 13:27:10.940130    4123 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0908 13:27:10.943110    4123 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21504-2314/kubeconfig
	I0908 13:27:10.946113    4123 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21504-2314/.minikube
	I0908 13:27:10.948928    4123 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W0908 13:27:10.954485    4123 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0908 13:27:10.954778    4123 driver.go:421] Setting default libvirt URI to qemu:///system
	I0908 13:27:10.989812    4123 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0908 13:27:10.989907    4123 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0908 13:27:11.448032    4123 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-09-08 13:27:11.438254758 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0908 13:27:11.448192    4123 docker.go:318] overlay module found
	I0908 13:27:11.451363    4123 out.go:99] Using the docker driver based on user configuration
	I0908 13:27:11.451405    4123 start.go:304] selected driver: docker
	I0908 13:27:11.451415    4123 start.go:918] validating driver "docker" against <nil>
	I0908 13:27:11.451518    4123 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0908 13:27:11.513946    4123 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-09-08 13:27:11.503992882 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0908 13:27:11.514097    4123 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0908 13:27:11.514399    4123 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I0908 13:27:11.514598    4123 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I0908 13:27:11.517622    4123 out.go:171] Using Docker driver with root privileges
	I0908 13:27:11.520376    4123 cni.go:84] Creating CNI manager for ""
	I0908 13:27:11.520441    4123 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0908 13:27:11.520455    4123 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I0908 13:27:11.520534    4123 start.go:348] cluster config:
	{Name:download-only-737790 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-737790 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 13:27:11.523457    4123 out.go:99] Starting "download-only-737790" primary control-plane node in "download-only-737790" cluster
	I0908 13:27:11.523479    4123 cache.go:123] Beginning downloading kic base image for docker with containerd
	I0908 13:27:11.526264    4123 out.go:99] Pulling base image v0.0.47-1756980985-21488 ...
	I0908 13:27:11.526291    4123 preload.go:131] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I0908 13:27:11.526445    4123 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 in local docker daemon
	I0908 13:27:11.542072    4123 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 to local cache
	I0908 13:27:11.542246    4123 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 in local cache directory
	I0908 13:27:11.542353    4123 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 to local cache
	I0908 13:27:11.590778    4123 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4
	I0908 13:27:11.590804    4123 cache.go:58] Caching tarball of preloaded images
	I0908 13:27:11.590947    4123 preload.go:131] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I0908 13:27:11.594268    4123 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I0908 13:27:11.594287    4123 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4 ...
	I0908 13:27:11.677005    4123 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4?checksum=md5:38d7f581f2fa4226c8af2c9106b982b7 -> /home/jenkins/minikube-integration/21504-2314/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4
	I0908 13:27:14.566193    4123 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4 ...
	I0908 13:27:14.566353    4123 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/21504-2314/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4 ...
	I0908 13:27:15.480303    4123 cache.go:61] Finished verifying existence of preloaded tar for v1.28.0 on containerd
	I0908 13:27:15.480781    4123 profile.go:143] Saving config to /home/jenkins/minikube-integration/21504-2314/.minikube/profiles/download-only-737790/config.json ...
	I0908 13:27:15.480836    4123 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21504-2314/.minikube/profiles/download-only-737790/config.json: {Name:mk765797ec22e9098a10bbd7529eaf666383ec33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 13:27:15.481022    4123 preload.go:131] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I0908 13:27:15.481234    4123 download.go:108] Downloading: https://dl.k8s.io/release/v1.28.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.0/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/21504-2314/.minikube/cache/linux/arm64/v1.28.0/kubectl
	
	
	* The control-plane node download-only-737790 host does not exist
	  To start a cluster, run: "minikube start -p download-only-737790"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.08s)
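Exit status 85 is the expected outcome here, not a failure: the profile was created with --download-only, so its control-plane host never existed and "minikube logs" has nothing to collect (see the trailing "host does not exist" hint in the stdout above). The behavior is easy to reproduce by hand:

out/minikube-linux-arm64 logs -p download-only-737790
echo $?   # 85 while the download-only profile has no running host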

TestDownloadOnly/v1.28.0/DeleteAll (0.21s)
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.21s)

TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-737790
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

TestDownloadOnly/v1.34.0/json-events (5.05s)
=== RUN   TestDownloadOnly/v1.34.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-907927 --force --alsologtostderr --kubernetes-version=v1.34.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-907927 --force --alsologtostderr --kubernetes-version=v1.34.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (5.046222187s)
--- PASS: TestDownloadOnly/v1.34.0/json-events (5.05s)

TestDownloadOnly/v1.34.0/preload-exists (0s)
=== RUN   TestDownloadOnly/v1.34.0/preload-exists
I0908 13:27:22.329448    4118 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
I0908 13:27:22.329486    4118 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21504-2314/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.0/preload-exists (0.00s)

TestDownloadOnly/v1.34.0/LogsDuration (0.23s)
=== RUN   TestDownloadOnly/v1.34.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-907927
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-907927: exit status 85 (233.942281ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                         ARGS                                                                                          │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-737790 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-737790 │ jenkins │ v1.36.0 │ 08 Sep 25 13:27 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                 │ minikube             │ jenkins │ v1.36.0 │ 08 Sep 25 13:27 UTC │ 08 Sep 25 13:27 UTC │
	│ delete  │ -p download-only-737790                                                                                                                                                               │ download-only-737790 │ jenkins │ v1.36.0 │ 08 Sep 25 13:27 UTC │ 08 Sep 25 13:27 UTC │
	│ start   │ -o=json --download-only -p download-only-907927 --force --alsologtostderr --kubernetes-version=v1.34.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-907927 │ jenkins │ v1.36.0 │ 08 Sep 25 13:27 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/08 13:27:17
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0908 13:27:17.324812    4319 out.go:360] Setting OutFile to fd 1 ...
	I0908 13:27:17.324953    4319 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 13:27:17.324966    4319 out.go:374] Setting ErrFile to fd 2...
	I0908 13:27:17.324984    4319 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 13:27:17.325296    4319 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21504-2314/.minikube/bin
	I0908 13:27:17.325776    4319 out.go:368] Setting JSON to true
	I0908 13:27:17.326536    4319 start.go:130] hostinfo: {"hostname":"ip-172-31-30-239","uptime":588,"bootTime":1757337450,"procs":145,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0908 13:27:17.326606    4319 start.go:140] virtualization:  
	I0908 13:27:17.330025    4319 out.go:99] [download-only-907927] minikube v1.36.0 on Ubuntu 20.04 (arm64)
	I0908 13:27:17.330300    4319 notify.go:220] Checking for updates...
	I0908 13:27:17.333229    4319 out.go:171] MINIKUBE_LOCATION=21504
	I0908 13:27:17.336284    4319 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0908 13:27:17.339313    4319 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21504-2314/kubeconfig
	I0908 13:27:17.342359    4319 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21504-2314/.minikube
	I0908 13:27:17.345196    4319 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W0908 13:27:17.350917    4319 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0908 13:27:17.351222    4319 driver.go:421] Setting default libvirt URI to qemu:///system
	I0908 13:27:17.382076    4319 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0908 13:27:17.382175    4319 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0908 13:27:17.446964    4319 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:49 SystemTime:2025-09-08 13:27:17.430064774 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0908 13:27:17.447071    4319 docker.go:318] overlay module found
	I0908 13:27:17.450143    4319 out.go:99] Using the docker driver based on user configuration
	I0908 13:27:17.450179    4319 start.go:304] selected driver: docker
	I0908 13:27:17.450192    4319 start.go:918] validating driver "docker" against <nil>
	I0908 13:27:17.450309    4319 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0908 13:27:17.502880    4319 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:49 SystemTime:2025-09-08 13:27:17.494342504 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0908 13:27:17.503044    4319 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0908 13:27:17.503365    4319 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I0908 13:27:17.503522    4319 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I0908 13:27:17.506641    4319 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-907927 host does not exist
	  To start a cluster, run: "minikube start -p download-only-907927"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.0/LogsDuration (0.23s)

TestDownloadOnly/v1.34.0/DeleteAll (0.33s)
=== RUN   TestDownloadOnly/v1.34.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.34.0/DeleteAll (0.33s)

TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds (0.21s)
=== RUN   TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-907927
--- PASS: TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds (0.21s)

TestBinaryMirror (0.61s)
=== RUN   TestBinaryMirror
I0908 13:27:24.238853    4118 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.0/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-441767 --alsologtostderr --binary-mirror http://127.0.0.1:41377 --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-441767" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-441767
--- PASS: TestBinaryMirror (0.61s)
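TestBinaryMirror serves kubectl from a local HTTP endpoint (127.0.0.1:41377) and passes it via --binary-mirror, so nothing is fetched from dl.k8s.io. A rough sketch of standing up such a mirror by hand; the directory layout is an assumption modeled on the dl.k8s.io release path visible in the log, and the profile name is hypothetical:

# serve a kubectl binary under a dl.k8s.io-style path (exact layout is an assumption)
mkdir -p mirror/release/v1.34.0/bin/linux/arm64
cp kubectl kubectl.sha256 mirror/release/v1.34.0/bin/linux/arm64/
(cd mirror && python3 -m http.server 41377) &
out/minikube-linux-arm64 start --download-only -p binary-mirror-demo \
  --binary-mirror http://127.0.0.1:41377 --driver=docker --container-runtime=containerd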

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-073153
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-073153: exit status 85 (65.268303ms)

-- stdout --
	* Profile "addons-073153" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-073153"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-073153
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-073153: exit status 85 (81.755819ms)

-- stdout --
	* Profile "addons-073153" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-073153"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)
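Both PreSetup checks exercise the same guard: addon commands against a profile that does not exist yet exit with status 85 and point at "minikube profile list" instead of mutating anything. Reproduced by hand:

out/minikube-linux-arm64 profile list                             # addons-073153 not created yet
out/minikube-linux-arm64 addons enable dashboard -p addons-073153
echo $?                                                           # 85: profile not found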

TestAddons/Setup (210.2s)
=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-arm64 start -p addons-073153 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-arm64 start -p addons-073153 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (3m30.197537076s)
--- PASS: TestAddons/Setup (210.20s)
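That single start invocation enables every addon exercised by the serial and parallel subtests below. Outside of CI the same state is usually reached incrementally, e.g.:

out/minikube-linux-arm64 start -p addons-073153 --memory=4096 --driver=docker --container-runtime=containerd
out/minikube-linux-arm64 addons enable registry -p addons-073153
out/minikube-linux-arm64 addons enable metrics-server -p addons-073153
out/minikube-linux-arm64 addons list -p addons-073153    # confirm what is enabled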

TestAddons/serial/Volcano (40.29s)
=== RUN   TestAddons/serial/Volcano
addons_test.go:868: volcano-scheduler stabilized in 55.188676ms
addons_test.go:884: volcano-controller stabilized in 55.370387ms
addons_test.go:876: volcano-admission stabilized in 55.402215ms
addons_test.go:890: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-scheduler-799f64f894-dsjhx" [b4651aef-9453-467a-b33a-49de512f68fd] Running
addons_test.go:890: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 5.008831716s
addons_test.go:894: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-admission-589c7dd587-pv4vs" [7b37c129-95ba-4375-8520-b51dae0535de] Running
addons_test.go:894: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 6.008523781s
addons_test.go:898: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-controllers-7dc6969b45-wg2hm" [86f3348b-7a32-49e1-b65c-76d3b7a9b664] Running
addons_test.go:898: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.003920118s
addons_test.go:903: (dbg) Run:  kubectl --context addons-073153 delete -n volcano-system job volcano-admission-init
addons_test.go:909: (dbg) Run:  kubectl --context addons-073153 create -f testdata/vcjob.yaml
addons_test.go:917: (dbg) Run:  kubectl --context addons-073153 get vcjob -n my-volcano
addons_test.go:935: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:352: "test-job-nginx-0" [ec432c97-71fe-4ae7-9398-75880835cc29] Pending
helpers_test.go:352: "test-job-nginx-0" [ec432c97-71fe-4ae7-9398-75880835cc29] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "test-job-nginx-0" [ec432c97-71fe-4ae7-9398-75880835cc29] Running
addons_test.go:935: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 12.003864299s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-073153 addons disable volcano --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-073153 addons disable volcano --alsologtostderr -v=1: (11.636281364s)
--- PASS: TestAddons/serial/Volcano (40.29s)
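testdata/vcjob.yaml itself is not reproduced in this report, but the pod name test-job-nginx-0 in namespace my-volcano is consistent with a minimal Volcano Job along the following lines (a sketch assuming the stock batch.volcano.sh/v1alpha1 API; all field values are illustrative):

kubectl --context addons-073153 create namespace my-volcano
kubectl --context addons-073153 create -f - <<'EOF'
apiVersion: batch.volcano.sh/v1alpha1
kind: Job
metadata:
  name: test-job
  namespace: my-volcano
spec:
  minAvailable: 1
  schedulerName: volcano
  tasks:
    - replicas: 1
      name: nginx
      template:
        spec:
          restartPolicy: Never
          containers:
            - name: nginx
              image: nginx
EOF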

TestAddons/serial/GCPAuth/Namespaces (0.18s)
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-073153 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-073153 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.18s)

TestAddons/serial/GCPAuth/FakeCredentials (10.08s)
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-073153 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-073153 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [1f990c88-76f6-4ee0-8e38-4b0e8766b1ec] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [1f990c88-76f6-4ee0-8e38-4b0e8766b1ec] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 9.003274689s
addons_test.go:694: (dbg) Run:  kubectl --context addons-073153 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-073153 describe sa gcp-auth-test
addons_test.go:720: (dbg) Run:  kubectl --context addons-073153 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:744: (dbg) Run:  kubectl --context addons-073153 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (10.08s)

TestAddons/parallel/Registry (15.26s)
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 6.04909ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-66898fdd98-9fwwm" [ac0abd64-6d1f-4be9-a1f0-ba2235cec03a] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.003546237s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-5c424" [013eaf2e-04a3-4699-b45e-9e42ed379764] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.002893181s
addons_test.go:392: (dbg) Run:  kubectl --context addons-073153 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-073153 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-073153 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.210858337s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-arm64 -p addons-073153 ip
2025/09/08 13:32:09 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-073153 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (15.26s)
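The DEBUG line shows the registry is also published on the node IP at port 5000. Pushing an image there from the host works once the docker daemon trusts the plain-HTTP endpoint; the image name below is hypothetical:

# requires "insecure-registries": ["192.168.49.2:5000"] in the host's /etc/docker/daemon.json
docker tag nginx 192.168.49.2:5000/demo-nginx
docker push 192.168.49.2:5000/demo-nginx
# in-cluster clients reach the same registry via registry.kube-system.svc.cluster.local,
# which is what the wget probe above exercises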

TestAddons/parallel/RegistryCreds (0.73s)
=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 3.688455ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-arm64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-073153
addons_test.go:332: (dbg) Run:  kubectl --context addons-073153 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-073153 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.73s)

TestAddons/parallel/Ingress (21.06s)
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-073153 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-073153 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-073153 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [0f85ca6e-3c29-44ed-a995-b1f3b5b4f018] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [0f85ca6e-3c29-44ed-a995-b1f3b5b4f018] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.004267212s
I0908 13:33:24.513621    4118 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-arm64 -p addons-073153 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-073153 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-arm64 -p addons-073153 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-073153 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-073153 addons disable ingress-dns --alsologtostderr -v=1: (1.885240886s)
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-073153 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-073153 addons disable ingress --alsologtostderr -v=1: (7.984318576s)
--- PASS: TestAddons/parallel/Ingress (21.06s)
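The curl above runs inside the node over minikube ssh, which is why plain 127.0.0.1 plus a Host header works. From the host, the equivalent check can pin the hostname to the node IP reported by "minikube ip" (192.168.49.2 in this run; directly routable here because this is the docker driver on Linux):

curl -s --resolve nginx.example.com:80:192.168.49.2 http://nginx.example.com
# or send the Host header straight to the node IP
curl -s -H 'Host: nginx.example.com' http://192.168.49.2/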

TestAddons/parallel/InspektorGadget (6.33s)
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-d765w" [9211e6a3-2a2c-4f04-a285-dc830b1accd2] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.004078378s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-073153 addons disable inspektor-gadget --alsologtostderr -v=1
--- PASS: TestAddons/parallel/InspektorGadget (6.33s)
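The addon only deploys the gadget DaemonSet that this subtest waits for; actually tracing anything requires the separate kubectl-gadget client, which the test does not install. A sketch assuming the krew-distributed plugin:

kubectl krew install gadget          # one-time client install (assumption: krew is present)
kubectl gadget snapshot process      # list running processes observed by the gadgets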

TestAddons/parallel/MetricsServer (5.88s)
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 34.177136ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-hp926" [eb118eb6-5b52-4cfa-a871-764a77a38d24] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.008052611s
addons_test.go:463: (dbg) Run:  kubectl --context addons-073153 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-073153 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.88s)

TestAddons/parallel/CSI (48.98s)
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I0908 13:32:43.549369    4118 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0908 13:32:43.552882    4118 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0908 13:32:43.552913    4118 kapi.go:107] duration metric: took 9.914145ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 9.92491ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-073153 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-073153 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-073153 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-073153 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [260bf098-a9d0-4ca0-a081-0135e68ee0fc] Pending
helpers_test.go:352: "task-pv-pod" [260bf098-a9d0-4ca0-a081-0135e68ee0fc] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [260bf098-a9d0-4ca0-a081-0135e68ee0fc] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 7.003782541s
addons_test.go:572: (dbg) Run:  kubectl --context addons-073153 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-073153 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:435: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:427: (dbg) Run:  kubectl --context addons-073153 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-073153 delete pod task-pv-pod
addons_test.go:582: (dbg) Done: kubectl --context addons-073153 delete pod task-pv-pod: (1.072301184s)
addons_test.go:588: (dbg) Run:  kubectl --context addons-073153 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-073153 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-073153 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-073153 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-073153 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-073153 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-073153 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-073153 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-073153 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-073153 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-073153 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-073153 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-073153 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-073153 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-073153 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-073153 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-073153 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-073153 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-073153 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-073153 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-073153 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-073153 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-073153 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-073153 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [bd48532e-937e-4ce9-b3a8-8d5c8145d127] Pending
helpers_test.go:352: "task-pv-pod-restore" [bd48532e-937e-4ce9-b3a8-8d5c8145d127] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [bd48532e-937e-4ce9-b3a8-8d5c8145d127] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.003953308s
addons_test.go:614: (dbg) Run:  kubectl --context addons-073153 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-073153 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-073153 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-073153 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-073153 addons disable volumesnapshots --alsologtostderr -v=1: (1.193992426s)
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-073153 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-073153 addons disable csi-hostpath-driver --alsologtostderr -v=1: (7.293264357s)
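For reference, the repeated helpers_test.go:402 lines above are a PVC phase poll; a minimal shell sketch of the same check, using the context and claim names from this run (the retry interval and the Bound target phase are assumptions about the helper's behavior):

    # Poll the restored PVC until it reports Bound (the helper retries until its 6m0s deadline)
    while [ "$(kubectl --context addons-073153 get pvc hpvc-restore -n default -o jsonpath='{.status.phase}')" != "Bound" ]; do
      sleep 2  # assumed retry interval; the test helper's actual interval may differ
    done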
--- PASS: TestAddons/parallel/CSI (48.98s)

TestAddons/parallel/Headlamp (24.42s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-073153 --alsologtostderr -v=1
addons_test.go:808: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-073153 --alsologtostderr -v=1: (1.506407987s)
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:352: "headlamp-6f46646d79-f4ct2" [b37d6eed-de80-4c72-9896-5aa7327f5025] Pending
helpers_test.go:352: "headlamp-6f46646d79-f4ct2" [b37d6eed-de80-4c72-9896-5aa7327f5025] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-6f46646d79-f4ct2" [b37d6eed-de80-4c72-9896-5aa7327f5025] Running
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 17.003378682s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-073153 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-073153 addons disable headlamp --alsologtostderr -v=1: (5.907840369s)
--- PASS: TestAddons/parallel/Headlamp (24.42s)

TestAddons/parallel/CloudSpanner (6.8s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-c55d4cb6d-kvd2q" [9329927d-e9e3-4b40-a5df-1f0e26d4ecb2] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.004924731s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-073153 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (6.80s)

TestAddons/parallel/LocalPath (52.53s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-073153 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-073153 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-073153 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-073153 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-073153 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-073153 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-073153 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-073153 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [c8e0be2c-e63e-4468-9100-b7dcd32c2ccb] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [c8e0be2c-e63e-4468-9100-b7dcd32c2ccb] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [c8e0be2c-e63e-4468-9100-b7dcd32c2ccb] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.004095245s
addons_test.go:967: (dbg) Run:  kubectl --context addons-073153 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-arm64 -p addons-073153 ssh "cat /opt/local-path-provisioner/pvc-c2d7a54c-51ed-414a-842a-12c1eec50413_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-073153 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-073153 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-073153 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-073153 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.049689477s)
--- PASS: TestAddons/parallel/LocalPath (52.53s)

TestAddons/parallel/NvidiaDevicePlugin (5.66s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-zvnxz" [c65ef7f0-f994-4a46-b3c9-708aaf5ae640] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.003327192s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-073153 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.66s)

TestAddons/parallel/Yakd (11.86s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-57x5p" [cbfe46c8-2330-4e1d-973e-08eaaede2b4f] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.00421564s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-073153 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-073153 addons disable yakd --alsologtostderr -v=1: (5.851342213s)
--- PASS: TestAddons/parallel/Yakd (11.86s)

TestAddons/StoppedEnableDisable (12.26s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-073153
addons_test.go:172: (dbg) Done: out/minikube-linux-arm64 stop -p addons-073153: (11.971002361s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-073153
addons_test.go:180: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-073153
addons_test.go:185: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-073153
--- PASS: TestAddons/StoppedEnableDisable (12.26s)

TestCertOptions (38.87s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-025519 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-025519 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (36.229740154s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-025519 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-025519 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-025519 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-025519" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-025519
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-025519: (1.983242916s)
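The ssh step above dumps the generated apiserver certificate; a minimal sketch of verifying the requested names and IPs by hand (run before the profile cleanup above; the grep is an assumption about how to read openssl's text output):

    # The extra --apiserver-names/--apiserver-ips values should appear as SANs
    out/minikube-linux-arm64 -p cert-options-025519 ssh \
      "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
      | grep -A1 "Subject Alternative Name"  # expect www.google.com and 192.168.15.15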
--- PASS: TestCertOptions (38.87s)

TestCertExpiration (231.94s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-555191 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-555191 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd: (39.226431807s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-555191 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-555191 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd: (10.219650589s)
helpers_test.go:175: Cleaning up "cert-expiration-555191" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-555191
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-555191: (2.49635298s)
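The two starts above first issue short-lived (3m) certificates, then restart with --cert-expiration=8760h so they are regenerated; a minimal sketch of confirming the new expiry, assuming the profile is still up:

    # Print the apiserver certificate's notAfter date after the second start
    out/minikube-linux-arm64 -p cert-expiration-555191 ssh \
      "openssl x509 -enddate -noout -in /var/lib/minikube/certs/apiserver.crt"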
--- PASS: TestCertExpiration (231.94s)

TestForceSystemdFlag (42.44s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-440051 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
E0908 14:08:04.506948    4118 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2314/.minikube/profiles/functional-478007/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-440051 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (40.127381939s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-440051 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-440051" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-440051
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-440051: (1.991279479s)
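The cat above checks that --force-systemd switched containerd to the systemd cgroup driver; a minimal sketch of the same assertion (SystemdCgroup is the standard containerd runc option key):

    # Expect: SystemdCgroup = true when --force-systemd was passed
    out/minikube-linux-arm64 -p force-systemd-flag-440051 ssh \
      "cat /etc/containerd/config.toml" | grep SystemdCgroup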
--- PASS: TestForceSystemdFlag (42.44s)

TestForceSystemdEnv (35s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-559595 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-559595 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (32.496070371s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-559595 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-559595" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-559595
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-559595: (2.152602226s)
--- PASS: TestForceSystemdEnv (35.00s)

TestDockerEnvContainerd (47.72s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd true linux arm64
docker_test.go:181: (dbg) Run:  out/minikube-linux-arm64 start -p dockerenv-214830 --driver=docker  --container-runtime=containerd
docker_test.go:181: (dbg) Done: out/minikube-linux-arm64 start -p dockerenv-214830 --driver=docker  --container-runtime=containerd: (32.086128932s)
docker_test.go:189: (dbg) Run:  /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-214830"
docker_test.go:220: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-nHYWdZCVfGHz/agent.26474" SSH_AGENT_PID="26475" DOCKER_HOST=ssh://docker@127.0.0.1:32773 docker version"
docker_test.go:243: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-nHYWdZCVfGHz/agent.26474" SSH_AGENT_PID="26475" DOCKER_HOST=ssh://docker@127.0.0.1:32773 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env"
docker_test.go:243: (dbg) Done: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-nHYWdZCVfGHz/agent.26474" SSH_AGENT_PID="26475" DOCKER_HOST=ssh://docker@127.0.0.1:32773 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env": (1.317493864s)
docker_test.go:250: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-nHYWdZCVfGHz/agent.26474" SSH_AGENT_PID="26475" DOCKER_HOST=ssh://docker@127.0.0.1:32773 docker image ls"
helpers_test.go:175: Cleaning up "dockerenv-214830" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p dockerenv-214830
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p dockerenv-214830: (1.956425929s)
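Outside the harness, the docker-env wiring exercised above is normally applied with eval; a minimal sketch against a running profile, using the flags from this run:

    # Point the host docker CLI at the daemon inside the minikube node over SSH
    eval "$(out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-214830)"
    docker version   # now answered by the daemon at ssh://docker@127.0.0.1:<port>
    docker image ls  # shows images built inside the node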
--- PASS: TestDockerEnvContainerd (47.72s)

TestErrorSpam/setup (31.73s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-938765 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-938765 --driver=docker  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-938765 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-938765 --driver=docker  --container-runtime=containerd: (31.732239902s)
--- PASS: TestErrorSpam/setup (31.73s)

TestErrorSpam/start (0.83s)

=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-938765 --log_dir /tmp/nospam-938765 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-938765 --log_dir /tmp/nospam-938765 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-938765 --log_dir /tmp/nospam-938765 start --dry-run
--- PASS: TestErrorSpam/start (0.83s)

TestErrorSpam/status (1.2s)

=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-938765 --log_dir /tmp/nospam-938765 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-938765 --log_dir /tmp/nospam-938765 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-938765 --log_dir /tmp/nospam-938765 status
--- PASS: TestErrorSpam/status (1.20s)

TestErrorSpam/pause (1.75s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-938765 --log_dir /tmp/nospam-938765 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-938765 --log_dir /tmp/nospam-938765 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-938765 --log_dir /tmp/nospam-938765 pause
--- PASS: TestErrorSpam/pause (1.75s)

TestErrorSpam/unpause (1.78s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-938765 --log_dir /tmp/nospam-938765 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-938765 --log_dir /tmp/nospam-938765 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-938765 --log_dir /tmp/nospam-938765 unpause
--- PASS: TestErrorSpam/unpause (1.78s)

TestErrorSpam/stop (1.43s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-938765 --log_dir /tmp/nospam-938765 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-arm64 -p nospam-938765 --log_dir /tmp/nospam-938765 stop: (1.231857403s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-938765 --log_dir /tmp/nospam-938765 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-938765 --log_dir /tmp/nospam-938765 stop
--- PASS: TestErrorSpam/stop (1.43s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21504-2314/.minikube/files/etc/test/nested/copy/4118/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (91.18s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-arm64 start -p functional-478007 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
E0908 13:35:55.169136    4118 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2314/.minikube/profiles/addons-073153/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:35:55.175979    4118 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2314/.minikube/profiles/addons-073153/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:35:55.187434    4118 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2314/.minikube/profiles/addons-073153/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:35:55.208998    4118 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2314/.minikube/profiles/addons-073153/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:35:55.250385    4118 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2314/.minikube/profiles/addons-073153/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:35:55.331768    4118 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2314/.minikube/profiles/addons-073153/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:35:55.493207    4118 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2314/.minikube/profiles/addons-073153/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:35:55.814835    4118 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2314/.minikube/profiles/addons-073153/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:35:56.456819    4118 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2314/.minikube/profiles/addons-073153/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:35:57.739016    4118 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2314/.minikube/profiles/addons-073153/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:36:00.300522    4118 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2314/.minikube/profiles/addons-073153/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:36:05.423294    4118 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2314/.minikube/profiles/addons-073153/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:36:15.664638    4118 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2314/.minikube/profiles/addons-073153/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:36:36.146078    4118 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2314/.minikube/profiles/addons-073153/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-arm64 start -p functional-478007 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (1m31.179479906s)
--- PASS: TestFunctional/serial/StartWithProxy (91.18s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (6.79s)

=== RUN   TestFunctional/serial/SoftStart
I0908 13:36:57.019954    4118 config.go:182] Loaded profile config "functional-478007": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
functional_test.go:674: (dbg) Run:  out/minikube-linux-arm64 start -p functional-478007 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-arm64 start -p functional-478007 --alsologtostderr -v=8: (6.793245019s)
functional_test.go:678: soft start took 6.794482022s for "functional-478007" cluster.
I0908 13:37:03.813500    4118 config.go:182] Loaded profile config "functional-478007": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
--- PASS: TestFunctional/serial/SoftStart (6.79s)

TestFunctional/serial/KubeContext (0.07s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.07s)

TestFunctional/serial/KubectlGetPods (0.09s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-478007 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.68s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-478007 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-478007 cache add registry.k8s.io/pause:3.1: (1.395954728s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-478007 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-478007 cache add registry.k8s.io/pause:3.3: (1.191322362s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-478007 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-478007 cache add registry.k8s.io/pause:latest: (1.094385477s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.68s)

TestFunctional/serial/CacheCmd/cache/add_local (1.31s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-478007 /tmp/TestFunctionalserialCacheCmdcacheadd_local4028889564/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-arm64 -p functional-478007 cache add minikube-local-cache-test:functional-478007
functional_test.go:1109: (dbg) Run:  out/minikube-linux-arm64 -p functional-478007 cache delete minikube-local-cache-test:functional-478007
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-478007
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.31s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.33s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-arm64 -p functional-478007 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.33s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.99s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-arm64 -p functional-478007 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 -p functional-478007 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-478007 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (287.242742ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-arm64 -p functional-478007 cache reload
functional_test.go:1173: (dbg) Done: out/minikube-linux-arm64 -p functional-478007 cache reload: (1.06341598s)
functional_test.go:1178: (dbg) Run:  out/minikube-linux-arm64 -p functional-478007 ssh sudo crictl inspecti registry.k8s.io/pause:latest
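The sequence above is the cache round-trip this subtest asserts: delete the image on the node, repopulate it from minikube's host-side cache, and confirm it is back; a minimal sketch using the commands logged above:

    # Remove the image inside the node, then restore it from the local cache
    out/minikube-linux-arm64 -p functional-478007 ssh sudo crictl rmi registry.k8s.io/pause:latest
    out/minikube-linux-arm64 -p functional-478007 cache reload
    out/minikube-linux-arm64 -p functional-478007 ssh sudo crictl inspecti registry.k8s.io/pause:latest  # succeeds again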
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.99s)

TestFunctional/serial/CacheCmd/cache/delete (0.12s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

TestFunctional/serial/MinikubeKubectlCmd (0.14s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-arm64 -p functional-478007 kubectl -- --context functional-478007 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.14s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-478007 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

TestFunctional/serial/ExtraConfig (42.53s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-arm64 start -p functional-478007 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0908 13:37:17.107393    4118 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2314/.minikube/profiles/addons-073153/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-arm64 start -p functional-478007 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (42.531771416s)
functional_test.go:776: restart took 42.531917158s for "functional-478007" cluster.
I0908 13:37:54.321633    4118 config.go:182] Loaded profile config "functional-478007": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
--- PASS: TestFunctional/serial/ExtraConfig (42.53s)

TestFunctional/serial/ComponentHealth (0.11s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-478007 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.11s)

TestFunctional/serial/LogsCmd (1.68s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-arm64 -p functional-478007 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-arm64 -p functional-478007 logs: (1.675495427s)
--- PASS: TestFunctional/serial/LogsCmd (1.68s)

TestFunctional/serial/LogsFileCmd (1.86s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-arm64 -p functional-478007 logs --file /tmp/TestFunctionalserialLogsFileCmd2189405771/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-arm64 -p functional-478007 logs --file /tmp/TestFunctionalserialLogsFileCmd2189405771/001/logs.txt: (1.855317183s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.86s)

TestFunctional/serial/InvalidService (4.72s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-478007 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-478007
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-478007: exit status 115 (384.373967ms)

-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:31822 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-478007 delete -f testdata/invalidsvc.yaml
functional_test.go:2332: (dbg) Done: kubectl --context functional-478007 delete -f testdata/invalidsvc.yaml: (1.078831851s)
--- PASS: TestFunctional/serial/InvalidService (4.72s)

TestFunctional/parallel/ConfigCmd (0.51s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-478007 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-478007 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-478007 config get cpus: exit status 14 (94.903587ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-478007 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-478007 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-478007 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-478007 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-478007 config get cpus: exit status 14 (87.204632ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
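The exit codes above are the contract being tested: config get exits 14 once the key has been unset; a minimal sketch of the round-trip using the commands logged above:

    # set/get/unset cycle; `config get` on a missing key exits 14
    out/minikube-linux-arm64 -p functional-478007 config set cpus 2
    out/minikube-linux-arm64 -p functional-478007 config get cpus    # prints 2
    out/minikube-linux-arm64 -p functional-478007 config unset cpus
    out/minikube-linux-arm64 -p functional-478007 config get cpus || echo "exit $?"  # exit 14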
--- PASS: TestFunctional/parallel/ConfigCmd (0.51s)

TestFunctional/parallel/DashboardCmd (8.17s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-478007 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-478007 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 42019: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (8.17s)

TestFunctional/parallel/DryRun (0.6s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-arm64 start -p functional-478007 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-478007 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (195.161648ms)

-- stdout --
	* [functional-478007] minikube v1.36.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21504
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21504-2314/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21504-2314/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile

-- /stdout --
** stderr ** 
	I0908 13:38:39.486967   41347 out.go:360] Setting OutFile to fd 1 ...
	I0908 13:38:39.487135   41347 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 13:38:39.487157   41347 out.go:374] Setting ErrFile to fd 2...
	I0908 13:38:39.487185   41347 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 13:38:39.487453   41347 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21504-2314/.minikube/bin
	I0908 13:38:39.487845   41347 out.go:368] Setting JSON to false
	I0908 13:38:39.488821   41347 start.go:130] hostinfo: {"hostname":"ip-172-31-30-239","uptime":1270,"bootTime":1757337450,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0908 13:38:39.488920   41347 start.go:140] virtualization:  
	I0908 13:38:39.492310   41347 out.go:179] * [functional-478007] minikube v1.36.0 on Ubuntu 20.04 (arm64)
	I0908 13:38:39.495468   41347 out.go:179]   - MINIKUBE_LOCATION=21504
	I0908 13:38:39.495542   41347 notify.go:220] Checking for updates...
	I0908 13:38:39.501634   41347 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0908 13:38:39.504706   41347 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21504-2314/kubeconfig
	I0908 13:38:39.507618   41347 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21504-2314/.minikube
	I0908 13:38:39.510586   41347 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0908 13:38:39.513432   41347 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0908 13:38:39.516751   41347 config.go:182] Loaded profile config "functional-478007": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0908 13:38:39.517367   41347 driver.go:421] Setting default libvirt URI to qemu:///system
	I0908 13:38:39.552671   41347 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0908 13:38:39.552824   41347 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0908 13:38:39.613477   41347 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:53 SystemTime:2025-09-08 13:38:39.603876785 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0908 13:38:39.613627   41347 docker.go:318] overlay module found
	I0908 13:38:39.617669   41347 out.go:179] * Using the docker driver based on existing profile
	I0908 13:38:39.620572   41347 start.go:304] selected driver: docker
	I0908 13:38:39.620597   41347 start.go:918] validating driver "docker" against &{Name:functional-478007 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-478007 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 13:38:39.620700   41347 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0908 13:38:39.624174   41347 out.go:203] 
	W0908 13:38:39.627100   41347 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0908 13:38:39.629932   41347 out.go:203] 
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-arm64 start -p functional-478007 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.60s)
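
Note on the two runs above: the first dry-run deliberately asks for 250MB so that minikube's preflight validation rejects it with RSRC_INSUFFICIENT_REQ_MEMORY, while the second omits --memory and succeeds. As a sketch, the same dry-run with a value clearing the 1800MB floor would look like this (2048mb is an assumed value, not taken from the test):

	out/minikube-linux-arm64 start -p functional-478007 --dry-run --memory 2048mb --alsologtostderr -v=1 --driver=docker --container-runtime=containerd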

TestFunctional/parallel/InternationalLanguage (0.2s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-arm64 start -p functional-478007 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-478007 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (201.339963ms)
-- stdout --
	* [functional-478007] minikube v1.36.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21504
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21504-2314/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21504-2314/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I0908 13:38:39.298155   41300 out.go:360] Setting OutFile to fd 1 ...
	I0908 13:38:39.298282   41300 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 13:38:39.298293   41300 out.go:374] Setting ErrFile to fd 2...
	I0908 13:38:39.298298   41300 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 13:38:39.299597   41300 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21504-2314/.minikube/bin
	I0908 13:38:39.300040   41300 out.go:368] Setting JSON to false
	I0908 13:38:39.301056   41300 start.go:130] hostinfo: {"hostname":"ip-172-31-30-239","uptime":1270,"bootTime":1757337450,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0908 13:38:39.301134   41300 start.go:140] virtualization:  
	I0908 13:38:39.304530   41300 out.go:179] * [functional-478007] minikube v1.36.0 sur Ubuntu 20.04 (arm64)
	I0908 13:38:39.308395   41300 out.go:179]   - MINIKUBE_LOCATION=21504
	I0908 13:38:39.308531   41300 notify.go:220] Checking for updates...
	I0908 13:38:39.314007   41300 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0908 13:38:39.316889   41300 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21504-2314/kubeconfig
	I0908 13:38:39.319727   41300 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21504-2314/.minikube
	I0908 13:38:39.322585   41300 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0908 13:38:39.326323   41300 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0908 13:38:39.329698   41300 config.go:182] Loaded profile config "functional-478007": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0908 13:38:39.330294   41300 driver.go:421] Setting default libvirt URI to qemu:///system
	I0908 13:38:39.361536   41300 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0908 13:38:39.361671   41300 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0908 13:38:39.419241   41300 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:53 SystemTime:2025-09-08 13:38:39.409507452 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0908 13:38:39.419354   41300 docker.go:318] overlay module found
	I0908 13:38:39.422528   41300 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I0908 13:38:39.425419   41300 start.go:304] selected driver: docker
	I0908 13:38:39.425447   41300 start.go:918] validating driver "docker" against &{Name:functional-478007 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-478007 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 13:38:39.425587   41300 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0908 13:38:39.429230   41300 out.go:203] 
	W0908 13:38:39.432082   41300 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0908 13:38:39.435027   41300 out.go:203] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.20s)
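
The French output is the localized counterpart of the English DryRun messages: "Utilisation du pilote docker basé sur le profil existant" is "Using the docker driver based on existing profile", and the X line reads "Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250 MiB is less than the usable minimum of 1800 MB". The test presumably switches language through the locale environment; a sketch, assuming LC_ALL is what minikube's translation layer inspects:

	LC_ALL=fr_FR.UTF-8 out/minikube-linux-arm64 start -p functional-478007 --dry-run --memory 250MB --driver=docker --container-runtime=containerd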

TestFunctional/parallel/StatusCmd (1.08s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-arm64 -p functional-478007 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-arm64 -p functional-478007 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-arm64 -p functional-478007 status -o json
E0908 13:38:39.029922    4118 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2314/.minikube/profiles/addons-073153/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctional/parallel/StatusCmd (1.08s)

TestFunctional/parallel/ServiceCmdConnect (8.64s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-478007 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-478007 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-6g6z2" [9754c4cf-ff5b-4d77-893d-5408384fcd5c] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-connect-7d85dfc575-6g6z2" [9754c4cf-ff5b-4d77-893d-5408384fcd5c] Running
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.004003369s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-arm64 -p functional-478007 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.49.2:31218
functional_test.go:1680: http://192.168.49.2:31218: success! body:
Request served by hello-node-connect-7d85dfc575-6g6z2

HTTP/1.1 GET /

Host: 192.168.49.2:31218
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (8.64s)
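
The flow above is ordinary kubectl plus "minikube service --url": the deployment is exposed as a NodePort service and minikube resolves it to the node IP and allocated port (http://192.168.49.2:31218 here). Once the URL is printed, any HTTP client can exercise the endpoint; a minimal sketch:

	URL=$(out/minikube-linux-arm64 -p functional-478007 service hello-node-connect --url)
	curl -s "$URL"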

TestFunctional/parallel/AddonsCmd (0.15s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-arm64 -p functional-478007 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-arm64 -p functional-478007 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.15s)

TestFunctional/parallel/PersistentVolumeClaim (33.12s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [6a04d674-4e83-4ff5-9a90-9121e3e62dd2] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.003638775s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-478007 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-478007 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-478007 get pvc myclaim -o=json
I0908 13:38:10.421730    4118 retry.go:31] will retry after 1.115738659s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:8b053ac7-b819-4b68-a2f4-ecad171a3198 ResourceVersion:628 Generation:0 CreationTimestamp:2025-09-08 13:38:10 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName: StorageClassName:0x400159bad0 VolumeMode:0x400159bae0 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-478007 get pvc myclaim -o=json
I0908 13:38:11.618004    4118 retry.go:31] will retry after 2.081881814s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:8b053ac7-b819-4b68-a2f4-ecad171a3198 ResourceVersion:628 Generation:0 CreationTimestamp:2025-09-08 13:38:10 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName: StorageClassName:0x4001644060 VolumeMode:0x4001644070 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-478007 get pvc myclaim -o=json
I0908 13:38:13.776828    4118 retry.go:31] will retry after 3.561141522s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:8b053ac7-b819-4b68-a2f4-ecad171a3198 ResourceVersion:628 Generation:0 CreationTimestamp:2025-09-08 13:38:10 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName: StorageClassName:0x40016442c0 VolumeMode:0x40016442d0 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-478007 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-478007 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [3412761a-fa8c-4f26-aae2-a31e9e05ce9c] Pending
helpers_test.go:352: "sp-pod" [3412761a-fa8c-4f26-aae2-a31e9e05ce9c] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [3412761a-fa8c-4f26-aae2-a31e9e05ce9c] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.003219914s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-478007 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-478007 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:112: (dbg) Done: kubectl --context functional-478007 delete -f testdata/storage-provisioner/pod.yaml: (1.026078651s)
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-478007 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [a7d14145-3e80-47af-afd8-3352826f72ff] Pending
helpers_test.go:352: "sp-pod" [a7d14145-3e80-47af-afd8-3352826f72ff] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [a7d14145-3e80-47af-afd8-3352826f72ff] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.002932682s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-478007 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (33.12s)
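
The retry messages embed the claim's last-applied-configuration, so the manifest behind testdata/storage-provisioner/pvc.yaml can be read off directly: a 500Mi ReadWriteOnce filesystem claim named myclaim. An equivalent claim, reconstructed from that annotation (the heredoc form is illustrative, not the test's own file):

	kubectl --context functional-478007 apply -f - <<-'EOF'
	apiVersion: v1
	kind: PersistentVolumeClaim
	metadata:
	  name: myclaim
	spec:
	  accessModes:
	    - ReadWriteOnce
	  resources:
	    requests:
	      storage: 500Mi
	  volumeMode: Filesystem
	EOF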

TestFunctional/parallel/SSHCmd (0.69s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-arm64 -p functional-478007 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-arm64 -p functional-478007 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.69s)

TestFunctional/parallel/CpCmd (2.35s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-478007 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-478007 ssh -n functional-478007 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-478007 cp functional-478007:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1526864964/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-478007 ssh -n functional-478007 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-478007 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-478007 ssh -n functional-478007 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.35s)

TestFunctional/parallel/FileSync (0.43s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/4118/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-arm64 -p functional-478007 ssh "sudo cat /etc/test/nested/copy/4118/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.43s)

TestFunctional/parallel/CertSync (2.43s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/4118.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-478007 ssh "sudo cat /etc/ssl/certs/4118.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/4118.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-478007 ssh "sudo cat /usr/share/ca-certificates/4118.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-478007 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/41182.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-478007 ssh "sudo cat /etc/ssl/certs/41182.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/41182.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-478007 ssh "sudo cat /usr/share/ca-certificates/41182.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-478007 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.43s)

TestFunctional/parallel/NodeLabels (0.11s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-478007 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.11s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.69s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-478007 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-478007 ssh "sudo systemctl is-active docker": exit status 1 (342.166055ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-478007 ssh "sudo systemctl is-active crio"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-478007 ssh "sudo systemctl is-active crio": exit status 1 (343.691826ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.69s)
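
The "exit status 1" wrapping "Process exited with status 3" is the expected shape here: systemctl is-active prints the unit state but exits non-zero for anything other than active (3 for inactive), and minikube ssh propagates that as a failure. Since this profile runs containerd, docker and crio must both report inactive; the active runtime can be checked the same way (assuming the unit is named containerd):

	out/minikube-linux-arm64 -p functional-478007 ssh "sudo systemctl is-active containerd"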

TestFunctional/parallel/License (0.3s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.30s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.61s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-478007 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-478007 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-478007 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 38874: os: process already finished
helpers_test.go:519: unable to terminate pid 38679: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-478007 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.61s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-478007 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.42s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-478007 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [89ded7a2-9151-4bb2-8964-2e7c449fb974] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx-svc" [89ded7a2-9151-4bb2-8964-2e7c449fb974] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.003113221s
I0908 13:38:13.919422    4118 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.42s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.09s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-478007 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.09s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.97.183.15 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
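
With the tunnel from StartTunnel still running, the nginx-svc LoadBalancer service is assigned an ingress IP (10.97.183.15 above, read via the jsonpath query) that is routable from the host, which is what AccessDirect hits. Checked by hand, it would look like:

	kubectl --context functional-478007 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
	curl -s http://10.97.183.15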

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-478007 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ServiceCmd/DeployApp (6.21s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-478007 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-478007 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-ngtx7" [c5db1ad3-9e75-4244-9359-f55de68932b1] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-75c85bcc94-ngtx7" [c5db1ad3-9e75-4244-9359-f55de68932b1] Running
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 6.0059563s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (6.21s)

TestFunctional/parallel/ServiceCmd/List (0.54s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-arm64 -p functional-478007 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.54s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.62s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-arm64 -p functional-478007 service list -o json
functional_test.go:1504: Took "621.300891ms" to run "out/minikube-linux-arm64 -p functional-478007 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.62s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.44s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-arm64 -p functional-478007 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.49.2:31396
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.44s)

TestFunctional/parallel/ServiceCmd/Format (0.44s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-arm64 -p functional-478007 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.44s)

TestFunctional/parallel/ServiceCmd/URL (0.36s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-arm64 -p functional-478007 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:31396
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.36s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.43s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.43s)

TestFunctional/parallel/ProfileCmd/profile_list (0.41s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1330: Took "356.500365ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1344: Took "56.84267ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.41s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.43s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1381: Took "373.679385ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1394: Took "55.42267ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.43s)

TestFunctional/parallel/MountCmd/any-port (8.16s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-478007 /tmp/TestFunctionalparallelMountCmdany-port1800579017/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1757338712802029440" to /tmp/TestFunctionalparallelMountCmdany-port1800579017/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1757338712802029440" to /tmp/TestFunctionalparallelMountCmdany-port1800579017/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1757338712802029440" to /tmp/TestFunctionalparallelMountCmdany-port1800579017/001/test-1757338712802029440
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-478007 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-478007 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (340.58992ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I0908 13:38:33.142907    4118 retry.go:31] will retry after 454.217681ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-478007 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-478007 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep  8 13:38 created-by-test
-rw-r--r-- 1 docker docker 24 Sep  8 13:38 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep  8 13:38 test-1757338712802029440
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-478007 ssh cat /mount-9p/test-1757338712802029440
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-478007 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [1b5442cc-dc97-4169-a35d-c4edb3b366bf] Pending
helpers_test.go:352: "busybox-mount" [1b5442cc-dc97-4169-a35d-c4edb3b366bf] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [1b5442cc-dc97-4169-a35d-c4edb3b366bf] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [1b5442cc-dc97-4169-a35d-c4edb3b366bf] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.008407585s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-478007 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-478007 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-478007 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-478007 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-478007 /tmp/TestFunctionalparallelMountCmdany-port1800579017/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.16s)
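
The initial findmnt failure followed by a retry is the usual startup race: minikube mount serves the host directory over 9p and the guest mount only appears once that server is up. A manual equivalent, with a hypothetical host directory:

	out/minikube-linux-arm64 mount -p functional-478007 /tmp/host-dir:/mount-9p &
	out/minikube-linux-arm64 -p functional-478007 ssh "findmnt -T /mount-9p"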

TestFunctional/parallel/MountCmd/specific-port (2.16s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-478007 /tmp/TestFunctionalparallelMountCmdspecific-port3074644773/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-478007 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-478007 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (495.070251ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I0908 13:38:41.461296    4118 retry.go:31] will retry after 390.765271ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-478007 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-478007 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-478007 /tmp/TestFunctionalparallelMountCmdspecific-port3074644773/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-478007 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-478007 ssh "sudo umount -f /mount-9p": exit status 1 (310.941176ms)
-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-478007 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-478007 /tmp/TestFunctionalparallelMountCmdspecific-port3074644773/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.16s)

TestFunctional/parallel/MountCmd/VerifyCleanup (2.33s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-478007 /tmp/TestFunctionalparallelMountCmdVerifyCleanup744900379/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-478007 /tmp/TestFunctionalparallelMountCmdVerifyCleanup744900379/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-478007 /tmp/TestFunctionalparallelMountCmdVerifyCleanup744900379/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-478007 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-478007 ssh "findmnt -T" /mount1: exit status 1 (677.796436ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I0908 13:38:43.804101    4118 retry.go:31] will retry after 455.449817ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-478007 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-478007 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-478007 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-478007 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-478007 /tmp/TestFunctionalparallelMountCmdVerifyCleanup744900379/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-478007 /tmp/TestFunctionalparallelMountCmdVerifyCleanup744900379/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-478007 /tmp/TestFunctionalparallelMountCmdVerifyCleanup744900379/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.33s)

TestFunctional/parallel/Version/short (0.08s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-arm64 -p functional-478007 version --short
--- PASS: TestFunctional/parallel/Version/short (0.08s)

TestFunctional/parallel/Version/components (1.48s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-arm64 -p functional-478007 version -o=json --components
functional_test.go:2275: (dbg) Done: out/minikube-linux-arm64 -p functional-478007 version -o=json --components: (1.483687511s)
--- PASS: TestFunctional/parallel/Version/components (1.48s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-478007 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-478007 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.0
registry.k8s.io/kube-proxy:v1.34.0
registry.k8s.io/kube-controller-manager:v1.34.0
registry.k8s.io/kube-apiserver:v1.34.0
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-478007
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:functional-478007
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-478007 image ls --format short --alsologtostderr:
I0908 13:38:53.880084   44206 out.go:360] Setting OutFile to fd 1 ...
I0908 13:38:53.880280   44206 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0908 13:38:53.880307   44206 out.go:374] Setting ErrFile to fd 2...
I0908 13:38:53.880327   44206 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0908 13:38:53.880616   44206 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21504-2314/.minikube/bin
I0908 13:38:53.881350   44206 config.go:182] Loaded profile config "functional-478007": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
I0908 13:38:53.881553   44206 config.go:182] Loaded profile config "functional-478007": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
I0908 13:38:53.882035   44206 cli_runner.go:164] Run: docker container inspect functional-478007 --format={{.State.Status}}
I0908 13:38:53.908879   44206 ssh_runner.go:195] Run: systemctl --version
I0908 13:38:53.908930   44206 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-478007
I0908 13:38:53.930266   44206 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21504-2314/.minikube/machines/functional-478007/id_rsa Username:docker}
I0908 13:38:54.029288   44206 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.26s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.31s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-478007 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-478007 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                    IMAGE                    │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ registry.k8s.io/pause                       │ 3.1                │ sha256:8057e0 │ 262kB  │
│ docker.io/kicbase/echo-server               │ functional-478007  │ sha256:ce2d2c │ 2.17MB │
│ docker.io/kindest/kindnetd                  │ v20250512-df8de77b │ sha256:b1a8c6 │ 40.6MB │
│ docker.io/library/minikube-local-cache-test │ functional-478007  │ sha256:00a309 │ 991B   │
│ docker.io/library/nginx                     │ alpine             │ sha256:35f3cb │ 22.9MB │
│ docker.io/library/nginx                     │ latest             │ sha256:47ef87 │ 68.9MB │
│ gcr.io/k8s-minikube/storage-provisioner     │ v5                 │ sha256:ba04bb │ 8.03MB │
│ registry.k8s.io/kube-apiserver              │ v1.34.0            │ sha256:d29193 │ 24.6MB │
│ registry.k8s.io/pause                       │ 3.10.1             │ sha256:d7b100 │ 268kB  │
│ gcr.io/k8s-minikube/busybox                 │ 1.28.4-glibc       │ sha256:1611cd │ 1.94MB │
│ registry.k8s.io/coredns/coredns             │ v1.12.1            │ sha256:138784 │ 20.4MB │
│ registry.k8s.io/kube-controller-manager     │ v1.34.0            │ sha256:996be7 │ 20.7MB │
│ registry.k8s.io/kube-scheduler              │ v1.34.0            │ sha256:a25f5e │ 15.8MB │
│ registry.k8s.io/pause                       │ 3.3                │ sha256:3d1873 │ 249kB  │
│ registry.k8s.io/pause                       │ latest             │ sha256:8cb209 │ 71.3kB │
│ registry.k8s.io/kube-proxy                  │ v1.34.0            │ sha256:6fc32d │ 22.8MB │
│ registry.k8s.io/etcd                        │ 3.6.4-0            │ sha256:a18947 │ 98.2MB │
└─────────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-478007 image ls --format table --alsologtostderr:
I0908 13:38:54.561566   44364 out.go:360] Setting OutFile to fd 1 ...
I0908 13:38:54.561803   44364 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0908 13:38:54.561839   44364 out.go:374] Setting ErrFile to fd 2...
I0908 13:38:54.562473   44364 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0908 13:38:54.562785   44364 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21504-2314/.minikube/bin
I0908 13:38:54.563751   44364 config.go:182] Loaded profile config "functional-478007": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
I0908 13:38:54.564054   44364 config.go:182] Loaded profile config "functional-478007": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
I0908 13:38:54.566057   44364 cli_runner.go:164] Run: docker container inspect functional-478007 --format={{.State.Status}}
I0908 13:38:54.597840   44364 ssh_runner.go:195] Run: systemctl --version
I0908 13:38:54.597892   44364 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-478007
I0908 13:38:54.620595   44364 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21504-2314/.minikube/machines/functional-478007/id_rsa Username:docker}
I0908 13:38:54.717406   44364 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.31s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.32s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-478007 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-478007 image ls --format json --alsologtostderr:
[{"id":"sha256:a25f5ef9c34c37c649f3b4f9631a169221ac2d6f41d9767c7588cd355f76f9ee","repoDigests":["registry.k8s.io/kube-scheduler@sha256:8fbe6d18415c8af9b31e177f6e444985f3a87349e083fe6eadd36753dddb17ff"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.0"],"size":"15779792"},{"id":"sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"],"repoTags":[],"size":"74084559"},{"id":"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"1935750"},{"id":"sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s
-minikube/storage-provisioner:v5"],"size":"8034419"},{"id":"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"262191"},{"id":"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"267939"},{"id":"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"71300"},{"id":"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-478007"],"size":"2173567"},{"id":"sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"],"repoTags":["docker.io/kindest/
kindnetd:v20250512-df8de77b"],"size":"40636774"},{"id":"sha256:00a30909728231c01bfad79c372b6622609f5b7604c895d2cdc08480629b555c","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-478007"],"size":"991"},{"id":"sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"18306114"},{"id":"sha256:d291939e994064911484215449d0ab96c535b02adc4fc5d0ad4e438cf71465be","repoDigests":["registry.k8s.io/kube-apiserver@sha256:fe86fe2f64021df8efa1a939a290bc21c8c128c66fc00ebbb6b5dea4c7a06812"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.0"],"size":"24570751"},{"id":"sha256:6fc32d66c141152245438e6512df788cb52d64a1617e33561950b0e7a4675abf","repoDigests":["registry.k8s.io/kube-proxy@sha256:364da8a25c742d7a35e9635cb8cf42c1faf5b367760d0f9f9a75bdd1f9d52067"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.0"],"size":"22788036"},{"id"
:"sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"249461"},{"id":"sha256:35f3cbee4fb77c3efb39f2723a21ce181906139442a37de8ffc52d89641d9936","repoDigests":["docker.io/library/nginx@sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8"],"repoTags":["docker.io/library/nginx:alpine"],"size":"22948447"},{"id":"sha256:47ef8710c9f5a9276b3e347e3ab71ee44c8483e20f8636380ae2737ef4c27758","repoDigests":["docker.io/library/nginx@sha256:33e0bbc7ca9ecf108140af6288c7c9d1ecc77548cbfd3952fd8466a75edefe57"],"repoTags":["docker.io/library/nginx:latest"],"size":"68855984"},{"id":"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"20392204"},{"id":"sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e",
"repoDigests":["registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"98207481"},{"id":"sha256:996be7e86d9b3a549d718de63713d9fea9db1f45ac44863a6770292d7b463570","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:f8ba6c082136e2c85ce71628c59c6574ca4b67f162911cb200c0a51a5b9bff81"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.0"],"size":"20720494"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-478007 image ls --format json --alsologtostderr:
I0908 13:38:54.225983   44274 out.go:360] Setting OutFile to fd 1 ...
I0908 13:38:54.226268   44274 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0908 13:38:54.226288   44274 out.go:374] Setting ErrFile to fd 2...
I0908 13:38:54.226299   44274 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0908 13:38:54.226717   44274 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21504-2314/.minikube/bin
I0908 13:38:54.227443   44274 config.go:182] Loaded profile config "functional-478007": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
I0908 13:38:54.227593   44274 config.go:182] Loaded profile config "functional-478007": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
I0908 13:38:54.228182   44274 cli_runner.go:164] Run: docker container inspect functional-478007 --format={{.State.Status}}
I0908 13:38:54.257729   44274 ssh_runner.go:195] Run: systemctl --version
I0908 13:38:54.257800   44274 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-478007
I0908 13:38:54.298646   44274 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21504-2314/.minikube/machines/functional-478007/id_rsa Username:docker}
I0908 13:38:54.401001   44274 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.32s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.33s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-478007 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-478007 image ls --format yaml --alsologtostderr:
- id: sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "40636774"
- id: sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
repoTags: []
size: "74084559"
- id: sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "20392204"
- id: sha256:996be7e86d9b3a549d718de63713d9fea9db1f45ac44863a6770292d7b463570
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:f8ba6c082136e2c85ce71628c59c6574ca4b67f162911cb200c0a51a5b9bff81
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.0
size: "20720494"
- id: sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
repoTags:
- registry.k8s.io/pause:3.10.1
size: "267939"
- id: sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "249461"
- id: sha256:00a30909728231c01bfad79c372b6622609f5b7604c895d2cdc08480629b555c
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-478007
size: "991"
- id: sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-478007
size: "2173567"
- id: sha256:35f3cbee4fb77c3efb39f2723a21ce181906139442a37de8ffc52d89641d9936
repoDigests:
- docker.io/library/nginx@sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8
repoTags:
- docker.io/library/nginx:alpine
size: "22948447"
- id: sha256:47ef8710c9f5a9276b3e347e3ab71ee44c8483e20f8636380ae2737ef4c27758
repoDigests:
- docker.io/library/nginx@sha256:33e0bbc7ca9ecf108140af6288c7c9d1ecc77548cbfd3952fd8466a75edefe57
repoTags:
- docker.io/library/nginx:latest
size: "68855984"
- id: sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "8034419"
- id: sha256:a25f5ef9c34c37c649f3b4f9631a169221ac2d6f41d9767c7588cd355f76f9ee
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:8fbe6d18415c8af9b31e177f6e444985f3a87349e083fe6eadd36753dddb17ff
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.0
size: "15779792"
- id: sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e
repoDigests:
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "98207481"
- id: sha256:6fc32d66c141152245438e6512df788cb52d64a1617e33561950b0e7a4675abf
repoDigests:
- registry.k8s.io/kube-proxy@sha256:364da8a25c742d7a35e9635cb8cf42c1faf5b367760d0f9f9a75bdd1f9d52067
repoTags:
- registry.k8s.io/kube-proxy:v1.34.0
size: "22788036"
- id: sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "262191"
- id: sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "18306114"
- id: sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "1935750"
- id: sha256:d291939e994064911484215449d0ab96c535b02adc4fc5d0ad4e438cf71465be
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:fe86fe2f64021df8efa1a939a290bc21c8c128c66fc00ebbb6b5dea4c7a06812
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.0
size: "24570751"
- id: sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "71300"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-478007 image ls --format yaml --alsologtostderr:
I0908 13:38:53.891322   44207 out.go:360] Setting OutFile to fd 1 ...
I0908 13:38:53.891429   44207 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0908 13:38:53.891442   44207 out.go:374] Setting ErrFile to fd 2...
I0908 13:38:53.891447   44207 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0908 13:38:53.891696   44207 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21504-2314/.minikube/bin
I0908 13:38:53.892291   44207 config.go:182] Loaded profile config "functional-478007": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
I0908 13:38:53.892449   44207 config.go:182] Loaded profile config "functional-478007": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
I0908 13:38:53.892887   44207 cli_runner.go:164] Run: docker container inspect functional-478007 --format={{.State.Status}}
I0908 13:38:53.917435   44207 ssh_runner.go:195] Run: systemctl --version
I0908 13:38:53.917489   44207 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-478007
I0908 13:38:53.943059   44207 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21504-2314/.minikube/machines/functional-478007/id_rsa Username:docker}
I0908 13:38:54.067852   44207 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.33s)

TestFunctional/parallel/ImageCommands/ImageBuild (4.31s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-arm64 -p functional-478007 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-478007 ssh pgrep buildkitd: exit status 1 (351.150842ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-arm64 -p functional-478007 image build -t localhost/my-image:functional-478007 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-arm64 -p functional-478007 image build -t localhost/my-image:functional-478007 testdata/build --alsologtostderr: (3.73768189s)
functional_test.go:338: (dbg) Stderr: out/minikube-linux-arm64 -p functional-478007 image build -t localhost/my-image:functional-478007 testdata/build --alsologtostderr:
I0908 13:38:54.508108   44358 out.go:360] Setting OutFile to fd 1 ...
I0908 13:38:54.508391   44358 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0908 13:38:54.508406   44358 out.go:374] Setting ErrFile to fd 2...
I0908 13:38:54.508412   44358 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0908 13:38:54.508716   44358 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21504-2314/.minikube/bin
I0908 13:38:54.509425   44358 config.go:182] Loaded profile config "functional-478007": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
I0908 13:38:54.511044   44358 config.go:182] Loaded profile config "functional-478007": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
I0908 13:38:54.511547   44358 cli_runner.go:164] Run: docker container inspect functional-478007 --format={{.State.Status}}
I0908 13:38:54.540506   44358 ssh_runner.go:195] Run: systemctl --version
I0908 13:38:54.540566   44358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-478007
I0908 13:38:54.568295   44358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21504-2314/.minikube/machines/functional-478007/id_rsa Username:docker}
I0908 13:38:54.658279   44358 build_images.go:161] Building image from path: /tmp/build.1005961900.tar
I0908 13:38:54.658350   44358 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0908 13:38:54.669224   44358 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1005961900.tar
I0908 13:38:54.672781   44358 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1005961900.tar: stat -c "%s %y" /var/lib/minikube/build/build.1005961900.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.1005961900.tar': No such file or directory
I0908 13:38:54.672816   44358 ssh_runner.go:362] scp /tmp/build.1005961900.tar --> /var/lib/minikube/build/build.1005961900.tar (3072 bytes)
I0908 13:38:54.699741   44358 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1005961900
I0908 13:38:54.709872   44358 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1005961900 -xf /var/lib/minikube/build/build.1005961900.tar
I0908 13:38:54.721541   44358 containerd.go:394] Building image: /var/lib/minikube/build/build.1005961900
I0908 13:38:54.721619   44358 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.1005961900 --local dockerfile=/var/lib/minikube/build/build.1005961900 --output type=image,name=localhost/my-image:functional-478007
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.6s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 DONE 0.1s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.2s
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.5s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.1s done
#5 DONE 0.6s

#6 [2/3] RUN true
#6 DONE 0.7s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:569b617dc22cfac0d0ce0c771924794cb2283445d4f179e02e500a7254cf5424
#8 exporting manifest sha256:569b617dc22cfac0d0ce0c771924794cb2283445d4f179e02e500a7254cf5424 0.0s done
#8 exporting config sha256:9a196a6862a0b1bd3ad930abffe3a99fac0e2b37fdcf6c926f5339a3d0867a6d 0.0s done
#8 naming to localhost/my-image:functional-478007 done
#8 DONE 0.2s
I0908 13:38:58.146976   44358 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.1005961900 --local dockerfile=/var/lib/minikube/build/build.1005961900 --output type=image,name=localhost/my-image:functional-478007: (3.425325397s)
I0908 13:38:58.147050   44358 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1005961900
I0908 13:38:58.156662   44358 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1005961900.tar
I0908 13:38:58.165360   44358 build_images.go:217] Built localhost/my-image:functional-478007 from /tmp/build.1005961900.tar
I0908 13:38:58.165391   44358 build_images.go:133] succeeded building to: functional-478007
I0908 13:38:58.165403   44358 build_images.go:134] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-478007 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.31s)

TestFunctional/parallel/ImageCommands/Setup (0.61s)
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-478007
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.61s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.59s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-arm64 -p functional-478007 image load --daemon kicbase/echo-server:functional-478007 --alsologtostderr
2025/09/08 13:38:47 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:370: (dbg) Done: out/minikube-linux-arm64 -p functional-478007 image load --daemon kicbase/echo-server:functional-478007 --alsologtostderr: (1.150746909s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-478007 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.59s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.22s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-478007 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.22s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.21s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-478007 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.21s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.2s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-478007 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.20s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.5s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-478007 image load --daemon kicbase/echo-server:functional-478007 --alsologtostderr
functional_test.go:380: (dbg) Done: out/minikube-linux-arm64 -p functional-478007 image load --daemon kicbase/echo-server:functional-478007 --alsologtostderr: (1.19053977s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-478007 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.50s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.54s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-478007
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-478007 image load --daemon kicbase/echo-server:functional-478007 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-478007 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.54s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.39s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-arm64 -p functional-478007 image save kicbase/echo-server:functional-478007 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.39s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.51s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-arm64 -p functional-478007 image rm kicbase/echo-server:functional-478007 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-478007 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.51s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.73s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-478007 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-478007 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.73s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.41s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-478007
functional_test.go:439: (dbg) Run:  out/minikube-linux-arm64 -p functional-478007 image save --daemon kicbase/echo-server:functional-478007 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect kicbase/echo-server:functional-478007
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.41s)

TestFunctional/delete_echo-server_images (0.04s)
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-478007
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-478007
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-478007
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (130.61s)
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 -p ha-242579 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd
E0908 13:40:55.167522    4118 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2314/.minikube/profiles/addons-073153/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 -p ha-242579 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd: (2m9.773877525s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-242579 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (130.61s)

TestMultiControlPlane/serial/DeployApp (41.56s)
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 -p ha-242579 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 -p ha-242579 kubectl -- rollout status deployment/busybox
E0908 13:41:22.876614    4118 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2314/.minikube/profiles/addons-073153/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 -p ha-242579 kubectl -- rollout status deployment/busybox: (38.804087841s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-242579 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 -p ha-242579 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-242579 kubectl -- exec busybox-7b57f96db7-4j8wl -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-242579 kubectl -- exec busybox-7b57f96db7-b2tjd -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-242579 kubectl -- exec busybox-7b57f96db7-r5f97 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-242579 kubectl -- exec busybox-7b57f96db7-4j8wl -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-242579 kubectl -- exec busybox-7b57f96db7-b2tjd -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-242579 kubectl -- exec busybox-7b57f96db7-r5f97 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-242579 kubectl -- exec busybox-7b57f96db7-4j8wl -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-242579 kubectl -- exec busybox-7b57f96db7-b2tjd -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-242579 kubectl -- exec busybox-7b57f96db7-r5f97 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (41.56s)

TestMultiControlPlane/serial/PingHostFromPods (1.64s)
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 -p ha-242579 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-242579 kubectl -- exec busybox-7b57f96db7-4j8wl -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-242579 kubectl -- exec busybox-7b57f96db7-4j8wl -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-242579 kubectl -- exec busybox-7b57f96db7-b2tjd -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-242579 kubectl -- exec busybox-7b57f96db7-b2tjd -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-242579 kubectl -- exec busybox-7b57f96db7-r5f97 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-242579 kubectl -- exec busybox-7b57f96db7-r5f97 -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.64s)

TestMultiControlPlane/serial/AddWorkerNode (17.75s)
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 -p ha-242579 node add --alsologtostderr -v 5
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 -p ha-242579 node add --alsologtostderr -v 5: (16.244352546s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-242579 status --alsologtostderr -v 5
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-242579 status --alsologtostderr -v 5: (1.504302506s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (17.75s)

TestMultiControlPlane/serial/NodeLabels (0.2s)
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-242579 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.20s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (1.11s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.113049136s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.11s)

TestMultiControlPlane/serial/CopyFile (19.32s)
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-arm64 -p ha-242579 status --output json --alsologtostderr -v 5
ha_test.go:328: (dbg) Done: out/minikube-linux-arm64 -p ha-242579 status --output json --alsologtostderr -v 5: (1.00762749s)
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-242579 cp testdata/cp-test.txt ha-242579:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-242579 ssh -n ha-242579 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-242579 cp ha-242579:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1495655192/001/cp-test_ha-242579.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-242579 ssh -n ha-242579 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-242579 cp ha-242579:/home/docker/cp-test.txt ha-242579-m02:/home/docker/cp-test_ha-242579_ha-242579-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-242579 ssh -n ha-242579 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-242579 ssh -n ha-242579-m02 "sudo cat /home/docker/cp-test_ha-242579_ha-242579-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-242579 cp ha-242579:/home/docker/cp-test.txt ha-242579-m03:/home/docker/cp-test_ha-242579_ha-242579-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-242579 ssh -n ha-242579 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-242579 ssh -n ha-242579-m03 "sudo cat /home/docker/cp-test_ha-242579_ha-242579-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-242579 cp ha-242579:/home/docker/cp-test.txt ha-242579-m04:/home/docker/cp-test_ha-242579_ha-242579-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-242579 ssh -n ha-242579 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-242579 ssh -n ha-242579-m04 "sudo cat /home/docker/cp-test_ha-242579_ha-242579-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-242579 cp testdata/cp-test.txt ha-242579-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-242579 ssh -n ha-242579-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-242579 cp ha-242579-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1495655192/001/cp-test_ha-242579-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-242579 ssh -n ha-242579-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-242579 cp ha-242579-m02:/home/docker/cp-test.txt ha-242579:/home/docker/cp-test_ha-242579-m02_ha-242579.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-242579 ssh -n ha-242579-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-242579 ssh -n ha-242579 "sudo cat /home/docker/cp-test_ha-242579-m02_ha-242579.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-242579 cp ha-242579-m02:/home/docker/cp-test.txt ha-242579-m03:/home/docker/cp-test_ha-242579-m02_ha-242579-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-242579 ssh -n ha-242579-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-242579 ssh -n ha-242579-m03 "sudo cat /home/docker/cp-test_ha-242579-m02_ha-242579-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-242579 cp ha-242579-m02:/home/docker/cp-test.txt ha-242579-m04:/home/docker/cp-test_ha-242579-m02_ha-242579-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-242579 ssh -n ha-242579-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-242579 ssh -n ha-242579-m04 "sudo cat /home/docker/cp-test_ha-242579-m02_ha-242579-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-242579 cp testdata/cp-test.txt ha-242579-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-242579 ssh -n ha-242579-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-242579 cp ha-242579-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1495655192/001/cp-test_ha-242579-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-242579 ssh -n ha-242579-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-242579 cp ha-242579-m03:/home/docker/cp-test.txt ha-242579:/home/docker/cp-test_ha-242579-m03_ha-242579.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-242579 ssh -n ha-242579-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-242579 ssh -n ha-242579 "sudo cat /home/docker/cp-test_ha-242579-m03_ha-242579.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-242579 cp ha-242579-m03:/home/docker/cp-test.txt ha-242579-m02:/home/docker/cp-test_ha-242579-m03_ha-242579-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-242579 ssh -n ha-242579-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-242579 ssh -n ha-242579-m02 "sudo cat /home/docker/cp-test_ha-242579-m03_ha-242579-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-242579 cp ha-242579-m03:/home/docker/cp-test.txt ha-242579-m04:/home/docker/cp-test_ha-242579-m03_ha-242579-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-242579 ssh -n ha-242579-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-242579 ssh -n ha-242579-m04 "sudo cat /home/docker/cp-test_ha-242579-m03_ha-242579-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-242579 cp testdata/cp-test.txt ha-242579-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-242579 ssh -n ha-242579-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-242579 cp ha-242579-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1495655192/001/cp-test_ha-242579-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-242579 ssh -n ha-242579-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-242579 cp ha-242579-m04:/home/docker/cp-test.txt ha-242579:/home/docker/cp-test_ha-242579-m04_ha-242579.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-242579 ssh -n ha-242579-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-242579 ssh -n ha-242579 "sudo cat /home/docker/cp-test_ha-242579-m04_ha-242579.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-242579 cp ha-242579-m04:/home/docker/cp-test.txt ha-242579-m02:/home/docker/cp-test_ha-242579-m04_ha-242579-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-242579 ssh -n ha-242579-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-242579 ssh -n ha-242579-m02 "sudo cat /home/docker/cp-test_ha-242579-m04_ha-242579-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-242579 cp ha-242579-m04:/home/docker/cp-test.txt ha-242579-m03:/home/docker/cp-test_ha-242579-m04_ha-242579-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-242579 ssh -n ha-242579-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-242579 ssh -n ha-242579-m03 "sudo cat /home/docker/cp-test_ha-242579-m04_ha-242579-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (19.32s)

TestMultiControlPlane/serial/StopSecondaryNode (12.95s)
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p ha-242579 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p ha-242579 node stop m02 --alsologtostderr -v 5: (12.172998199s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-arm64 -p ha-242579 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-242579 status --alsologtostderr -v 5: exit status 7 (781.438154ms)

-- stdout --
	ha-242579
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-242579-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-242579-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-242579-m04
	type: Worker
	host: Running
	kubelet: Running

-- /stdout --
** stderr ** 
	I0908 13:42:45.689008   61307 out.go:360] Setting OutFile to fd 1 ...
	I0908 13:42:45.689218   61307 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 13:42:45.689248   61307 out.go:374] Setting ErrFile to fd 2...
	I0908 13:42:45.689269   61307 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 13:42:45.689559   61307 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21504-2314/.minikube/bin
	I0908 13:42:45.689891   61307 out.go:368] Setting JSON to false
	I0908 13:42:45.690078   61307 mustload.go:65] Loading cluster: ha-242579
	I0908 13:42:45.690075   61307 notify.go:220] Checking for updates...
	I0908 13:42:45.690554   61307 config.go:182] Loaded profile config "ha-242579": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0908 13:42:45.690601   61307 status.go:174] checking status of ha-242579 ...
	I0908 13:42:45.691211   61307 cli_runner.go:164] Run: docker container inspect ha-242579 --format={{.State.Status}}
	I0908 13:42:45.711353   61307 status.go:371] ha-242579 host status = "Running" (err=<nil>)
	I0908 13:42:45.711374   61307 host.go:66] Checking if "ha-242579" exists ...
	I0908 13:42:45.711677   61307 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-242579
	I0908 13:42:45.746182   61307 host.go:66] Checking if "ha-242579" exists ...
	I0908 13:42:45.746567   61307 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0908 13:42:45.746612   61307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-242579
	I0908 13:42:45.780616   61307 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21504-2314/.minikube/machines/ha-242579/id_rsa Username:docker}
	I0908 13:42:45.877873   61307 ssh_runner.go:195] Run: systemctl --version
	I0908 13:42:45.882044   61307 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0908 13:42:45.901683   61307 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0908 13:42:45.982918   61307 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:62 OomKillDisable:true NGoroutines:72 SystemTime:2025-09-08 13:42:45.970483303 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0908 13:42:45.983462   61307 kubeconfig.go:125] found "ha-242579" server: "https://192.168.49.254:8443"
	I0908 13:42:45.983498   61307 api_server.go:166] Checking apiserver status ...
	I0908 13:42:45.983538   61307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0908 13:42:45.995183   61307 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1444/cgroup
	I0908 13:42:46.007211   61307 api_server.go:182] apiserver freezer: "8:freezer:/docker/9592831c036bdf9fe2147c1c95f73fd9318bbcbe4261180af5285d8746eb3ff5/kubepods/burstable/poddeeca14339a5754786d37a57e764ffb8/28b1303d52da194f4dc19a417b3170f3c9ba2fa6e6f68a6ffb2c0b7075ee4f67"
	I0908 13:42:46.007288   61307 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/9592831c036bdf9fe2147c1c95f73fd9318bbcbe4261180af5285d8746eb3ff5/kubepods/burstable/poddeeca14339a5754786d37a57e764ffb8/28b1303d52da194f4dc19a417b3170f3c9ba2fa6e6f68a6ffb2c0b7075ee4f67/freezer.state
	I0908 13:42:46.017879   61307 api_server.go:204] freezer state: "THAWED"
	I0908 13:42:46.017910   61307 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0908 13:42:46.026458   61307 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0908 13:42:46.026491   61307 status.go:463] ha-242579 apiserver status = Running (err=<nil>)
	I0908 13:42:46.026503   61307 status.go:176] ha-242579 status: &{Name:ha-242579 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0908 13:42:46.026519   61307 status.go:174] checking status of ha-242579-m02 ...
	I0908 13:42:46.026851   61307 cli_runner.go:164] Run: docker container inspect ha-242579-m02 --format={{.State.Status}}
	I0908 13:42:46.046211   61307 status.go:371] ha-242579-m02 host status = "Stopped" (err=<nil>)
	I0908 13:42:46.046233   61307 status.go:384] host is not running, skipping remaining checks
	I0908 13:42:46.046239   61307 status.go:176] ha-242579-m02 status: &{Name:ha-242579-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0908 13:42:46.046257   61307 status.go:174] checking status of ha-242579-m03 ...
	I0908 13:42:46.046599   61307 cli_runner.go:164] Run: docker container inspect ha-242579-m03 --format={{.State.Status}}
	I0908 13:42:46.065687   61307 status.go:371] ha-242579-m03 host status = "Running" (err=<nil>)
	I0908 13:42:46.065715   61307 host.go:66] Checking if "ha-242579-m03" exists ...
	I0908 13:42:46.066052   61307 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-242579-m03
	I0908 13:42:46.087316   61307 host.go:66] Checking if "ha-242579-m03" exists ...
	I0908 13:42:46.087641   61307 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0908 13:42:46.087697   61307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-242579-m03
	I0908 13:42:46.106647   61307 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/21504-2314/.minikube/machines/ha-242579-m03/id_rsa Username:docker}
	I0908 13:42:46.193901   61307 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0908 13:42:46.206837   61307 kubeconfig.go:125] found "ha-242579" server: "https://192.168.49.254:8443"
	I0908 13:42:46.206870   61307 api_server.go:166] Checking apiserver status ...
	I0908 13:42:46.206909   61307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0908 13:42:46.220710   61307 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1471/cgroup
	I0908 13:42:46.230075   61307 api_server.go:182] apiserver freezer: "8:freezer:/docker/e4839cc9c4ff69068dc6ae54f431364832e8229bdf2695acac94cc01abe02b0e/kubepods/burstable/podac01d6c87feb90b4bd8d76cfc91861d5/508d5bea4f2993b27e4f9cbb1c4d5ad51d565d5bc12b67fc240a10ba225052e1"
	I0908 13:42:46.230158   61307 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/e4839cc9c4ff69068dc6ae54f431364832e8229bdf2695acac94cc01abe02b0e/kubepods/burstable/podac01d6c87feb90b4bd8d76cfc91861d5/508d5bea4f2993b27e4f9cbb1c4d5ad51d565d5bc12b67fc240a10ba225052e1/freezer.state
	I0908 13:42:46.242040   61307 api_server.go:204] freezer state: "THAWED"
	I0908 13:42:46.242066   61307 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0908 13:42:46.250504   61307 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0908 13:42:46.250535   61307 status.go:463] ha-242579-m03 apiserver status = Running (err=<nil>)
	I0908 13:42:46.250546   61307 status.go:176] ha-242579-m03 status: &{Name:ha-242579-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0908 13:42:46.250561   61307 status.go:174] checking status of ha-242579-m04 ...
	I0908 13:42:46.250909   61307 cli_runner.go:164] Run: docker container inspect ha-242579-m04 --format={{.State.Status}}
	I0908 13:42:46.268968   61307 status.go:371] ha-242579-m04 host status = "Running" (err=<nil>)
	I0908 13:42:46.268992   61307 host.go:66] Checking if "ha-242579-m04" exists ...
	I0908 13:42:46.269284   61307 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-242579-m04
	I0908 13:42:46.286602   61307 host.go:66] Checking if "ha-242579-m04" exists ...
	I0908 13:42:46.286921   61307 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0908 13:42:46.286966   61307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-242579-m04
	I0908 13:42:46.311293   61307 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32803 SSHKeyPath:/home/jenkins/minikube-integration/21504-2314/.minikube/machines/ha-242579-m04/id_rsa Username:docker}
	I0908 13:42:46.401181   61307 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0908 13:42:46.413318   61307 status.go:176] ha-242579-m04 status: &{Name:ha-242579-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.95s)
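
The stderr trace above documents, step by step, how `status` verifies an apiserver: pgrep finds the kube-apiserver PID, the freezer line of /proc/<pid>/cgroup locates its cgroup, freezer.state must read THAWED (i.e. the pod is not frozen, as it would be after `minikube pause`), and finally /healthz must return 200. A minimal local sketch of the cgroup portion, assuming a cgroup v1 host like the 5.15 Ubuntu machine in this run; the hard-coded PID stands in for the pgrep result:

package main

import (
	"fmt"
	"os"
	"regexp"
	"strings"
)

func main() {
	pid := 1444 // stands in for: sudo pgrep -xnf kube-apiserver.*minikube.*
	data, err := os.ReadFile(fmt.Sprintf("/proc/%d/cgroup", pid))
	if err != nil {
		panic(err)
	}
	// Same extraction as `egrep ^[0-9]+:freezer:` in the trace above.
	m := regexp.MustCompile(`(?m)^\d+:freezer:(.+)$`).FindStringSubmatch(string(data))
	if m == nil {
		panic("no freezer controller entry (cgroup v2 host?)")
	}
	state, err := os.ReadFile("/sys/fs/cgroup/freezer" + m[1] + "/freezer.state")
	if err != nil {
		panic(err)
	}
	// "THAWED" means not paused; status then probes https://<lb>:8443/healthz.
	fmt.Println("freezer state:", strings.TrimSpace(string(state)))
}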

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.76s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.76s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (11.96s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p ha-242579 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p ha-242579 node start m02 --alsologtostderr -v 5: (10.540804966s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-242579 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Done: out/minikube-linux-arm64 -p ha-242579 status --alsologtostderr -v 5: (1.298702168s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (11.96s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.72s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.724216362s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.72s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (106s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-arm64 -p ha-242579 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-arm64 -p ha-242579 stop --alsologtostderr -v 5
E0908 13:43:04.507051    4118 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2314/.minikube/profiles/functional-478007/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:43:04.513350    4118 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2314/.minikube/profiles/functional-478007/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:43:04.524698    4118 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2314/.minikube/profiles/functional-478007/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:43:04.546144    4118 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2314/.minikube/profiles/functional-478007/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:43:04.587502    4118 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2314/.minikube/profiles/functional-478007/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:43:04.668943    4118 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2314/.minikube/profiles/functional-478007/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:43:04.830411    4118 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2314/.minikube/profiles/functional-478007/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:43:05.151872    4118 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2314/.minikube/profiles/functional-478007/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:43:05.793611    4118 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2314/.minikube/profiles/functional-478007/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:43:07.075277    4118 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2314/.minikube/profiles/functional-478007/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:43:09.637913    4118 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2314/.minikube/profiles/functional-478007/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:43:14.759293    4118 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2314/.minikube/profiles/functional-478007/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:43:25.001076    4118 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2314/.minikube/profiles/functional-478007/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-arm64 -p ha-242579 stop --alsologtostderr -v 5: (37.03297588s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-arm64 -p ha-242579 start --wait true --alsologtostderr -v 5
E0908 13:43:45.482501    4118 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2314/.minikube/profiles/functional-478007/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:44:26.443840    4118 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2314/.minikube/profiles/functional-478007/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-arm64 -p ha-242579 start --wait true --alsologtostderr -v 5: (1m8.783385468s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-arm64 -p ha-242579 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (106.00s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (10.29s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-arm64 -p ha-242579 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-arm64 -p ha-242579 node delete m03 --alsologtostderr -v 5: (9.376727447s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-arm64 -p ha-242579 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (10.29s)
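
The readiness assertion above hands kubectl a Go template that walks the NodeList and prints the status of every node's Ready condition. The same template logic can be exercised directly with text/template against a hand-rolled stand-in, sketched below; note that kubectl evaluates templates over the JSON form, hence the lowercase .items/.status names in the command versus the exported field names a Go struct needs:

package main

import (
	"os"
	"text/template"
)

// Stand-in types mirroring just the slice of the Node API shape the template walks.
type condition struct{ Type, Status string }
type nodeStatus struct{ Conditions []condition }
type node struct{ Status nodeStatus }
type nodeList struct{ Items []node }

func main() {
	// Same logic as the kubectl template above, with Go-side field casing.
	tmpl := template.Must(template.New("ready").Parse(
		`{{range .Items}}{{range .Status.Conditions}}{{if eq .Type "Ready"}} {{.Status}}{{"\n"}}{{end}}{{end}}{{end}}`))
	list := nodeList{Items: []node{
		{Status: nodeStatus{Conditions: []condition{{Type: "Ready", Status: "True"}}}},
		{Status: nodeStatus{Conditions: []condition{{Type: "Ready", Status: "True"}}}},
	}}
	// Prints one Ready condition status per node, e.g. " True" twice here.
	if err := tmpl.Execute(os.Stdout, list); err != nil {
		panic(err)
	}
}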

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.73s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.73s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (36s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-arm64 -p ha-242579 stop --alsologtostderr -v 5
ha_test.go:533: (dbg) Done: out/minikube-linux-arm64 -p ha-242579 stop --alsologtostderr -v 5: (35.890748451s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-arm64 -p ha-242579 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-242579 status --alsologtostderr -v 5: exit status 7 (107.997551ms)

                                                
                                                
-- stdout --
	ha-242579
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-242579-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-242579-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0908 13:45:33.810781   76233 out.go:360] Setting OutFile to fd 1 ...
	I0908 13:45:33.810962   76233 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 13:45:33.810973   76233 out.go:374] Setting ErrFile to fd 2...
	I0908 13:45:33.810980   76233 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 13:45:33.811213   76233 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21504-2314/.minikube/bin
	I0908 13:45:33.811407   76233 out.go:368] Setting JSON to false
	I0908 13:45:33.811449   76233 mustload.go:65] Loading cluster: ha-242579
	I0908 13:45:33.811559   76233 notify.go:220] Checking for updates...
	I0908 13:45:33.811858   76233 config.go:182] Loaded profile config "ha-242579": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0908 13:45:33.811885   76233 status.go:174] checking status of ha-242579 ...
	I0908 13:45:33.812453   76233 cli_runner.go:164] Run: docker container inspect ha-242579 --format={{.State.Status}}
	I0908 13:45:33.832186   76233 status.go:371] ha-242579 host status = "Stopped" (err=<nil>)
	I0908 13:45:33.832209   76233 status.go:384] host is not running, skipping remaining checks
	I0908 13:45:33.832215   76233 status.go:176] ha-242579 status: &{Name:ha-242579 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0908 13:45:33.832257   76233 status.go:174] checking status of ha-242579-m02 ...
	I0908 13:45:33.832599   76233 cli_runner.go:164] Run: docker container inspect ha-242579-m02 --format={{.State.Status}}
	I0908 13:45:33.854196   76233 status.go:371] ha-242579-m02 host status = "Stopped" (err=<nil>)
	I0908 13:45:33.854220   76233 status.go:384] host is not running, skipping remaining checks
	I0908 13:45:33.854227   76233 status.go:176] ha-242579-m02 status: &{Name:ha-242579-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0908 13:45:33.854247   76233 status.go:174] checking status of ha-242579-m04 ...
	I0908 13:45:33.854539   76233 cli_runner.go:164] Run: docker container inspect ha-242579-m04 --format={{.State.Status}}
	I0908 13:45:33.870963   76233 status.go:371] ha-242579-m04 host status = "Stopped" (err=<nil>)
	I0908 13:45:33.870985   76233 status.go:384] host is not running, skipping remaining checks
	I0908 13:45:33.870992   76233 status.go:176] ha-242579-m04 status: &{Name:ha-242579-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (36.00s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (60.96s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-arm64 -p ha-242579 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd
E0908 13:45:48.365556    4118 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2314/.minikube/profiles/functional-478007/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:45:55.166705    4118 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2314/.minikube/profiles/addons-073153/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-arm64 -p ha-242579 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd: (1m0.023454531s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-arm64 -p ha-242579 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (60.96s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.74s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.74s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (30.2s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-arm64 -p ha-242579 node add --control-plane --alsologtostderr -v 5
ha_test.go:607: (dbg) Done: out/minikube-linux-arm64 -p ha-242579 node add --control-plane --alsologtostderr -v 5: (28.880912379s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-arm64 -p ha-242579 status --alsologtostderr -v 5
ha_test.go:613: (dbg) Done: out/minikube-linux-arm64 -p ha-242579 status --alsologtostderr -v 5: (1.31563227s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (30.20s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.1s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.103403143s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.10s)

                                                
                                    
TestJSONOutput/start/Command (82.97s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-882438 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=containerd
E0908 13:48:04.512528    4118 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2314/.minikube/profiles/functional-478007/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:48:32.212499    4118 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2314/.minikube/profiles/functional-478007/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-882438 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=containerd: (1m22.963575326s)
--- PASS: TestJSONOutput/start/Command (82.97s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.76s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-882438 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.76s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.69s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-882438 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.69s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (5.71s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-882438 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-882438 --output=json --user=testUser: (5.711705192s)
--- PASS: TestJSONOutput/stop/Command (5.71s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.25s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-759723 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-759723 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (94.979277ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"9d8474ef-2dec-4ac0-9240-11fcb6487960","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-759723] minikube v1.36.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"fb87ba5e-a16b-443a-8867-0064730dbe2d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21504"}}
	{"specversion":"1.0","id":"d66fa608-3e69-46d7-afff-f09cb8bae24c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"27b6957f-6230-4266-afcf-eecf92b9476c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21504-2314/kubeconfig"}}
	{"specversion":"1.0","id":"32096ac3-a1a5-4210-8cf6-93eb1f92627b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21504-2314/.minikube"}}
	{"specversion":"1.0","id":"872a29a6-4774-47b5-bcc2-6bd703e5807a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"eceacfbe-695e-4cfc-9aa1-ecdd2804b47c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"599ed06b-96d0-4cc8-9bc9-afe936fe38bc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-759723" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-759723
--- PASS: TestErrorJSONOutput (0.25s)
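
Every stdout line above is a CloudEvents 1.0 envelope: specversion, a UUID id, the minikube source URL, an io.k8s.sigs.minikube.* type, and a string-keyed data object (here the step banner, several info messages, and the final DRV_UNSUPPORTED_OS error carrying exitcode 56). A short consumer sketch; the struct fields are read directly off the events shown, and data is kept as map[string]string since its keys vary by event type:

package main

import (
	"encoding/json"
	"fmt"
)

type cloudEvent struct {
	SpecVersion     string            `json:"specversion"`
	ID              string            `json:"id"`
	Source          string            `json:"source"`
	Type            string            `json:"type"`
	DataContentType string            `json:"datacontenttype"`
	Data            map[string]string `json:"data"`
}

func main() {
	// The error event from the stdout above, verbatim.
	line := `{"specversion":"1.0","id":"599ed06b-96d0-4cc8-9bc9-afe936fe38bc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}`
	var ev cloudEvent
	if err := json.Unmarshal([]byte(line), &ev); err != nil {
		panic(err)
	}
	fmt.Printf("%s: %s (exit %s)\n", ev.Data["name"], ev.Data["message"], ev.Data["exitcode"])
}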

                                                
                                    
TestKicCustomNetwork/create_custom_network (44.41s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-394640 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-394640 --network=: (42.280947559s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-394640" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-394640
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-394640: (2.109576101s)
--- PASS: TestKicCustomNetwork/create_custom_network (44.41s)

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (34.42s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-531828 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-531828 --network=bridge: (32.360405268s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-531828" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-531828
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-531828: (2.035726466s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (34.42s)

                                                
                                    
TestKicExistingNetwork (37.68s)

=== RUN   TestKicExistingNetwork
I0908 13:50:10.449186    4118 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0908 13:50:10.463913    4118 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0908 13:50:10.463996    4118 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I0908 13:50:10.464013    4118 cli_runner.go:164] Run: docker network inspect existing-network
W0908 13:50:10.480174    4118 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I0908 13:50:10.480207    4118 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

                                                
                                                
stderr:
Error response from daemon: network existing-network not found
I0908 13:50:10.480223    4118 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

                                                
                                                
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

                                                
                                                
** /stderr **
I0908 13:50:10.480324    4118 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0908 13:50:10.496549    4118 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-431c1a61966e IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:46:58:d5:96:47:2e} reservation:<nil>}
I0908 13:50:10.496804    4118 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001bf9070}
I0908 13:50:10.496825    4118 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I0908 13:50:10.496876    4118 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I0908 13:50:10.555604    4118 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-636885 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-636885 --network=existing-network: (35.600385629s)
helpers_test.go:175: Cleaning up "existing-network-636885" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-636885
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-636885: (1.941717563s)
I0908 13:50:48.114115    4118 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (37.68s)
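
The I-lines above show the subnet-selection pattern: inspect the bridge network, skip 192.168.49.0/24 because an existing interface (br-431c1a61966e) already owns it, settle on 192.168.58.0/24, then docker network create with minikube's created_by/name labels (the label filter is also how the cleanup step at the end finds the network again). A rough local approximation of that scan, assuming the 49, 58, 67, ... stride implied by this log and by the 192.168.67.x multinode cluster later in the report; minikube's real network package is more thorough:

package main

import (
	"fmt"
	"net"
)

func main() {
	taken := func(subnet *net.IPNet) bool {
		addrs, err := net.InterfaceAddrs()
		if err != nil {
			return true // treat errors as "taken" to stay conservative
		}
		for _, a := range addrs {
			if ip, _, err := net.ParseCIDR(a.String()); err == nil && subnet.Contains(ip) {
				return true
			}
		}
		return false
	}
	// Walk candidate private /24s and report the first one no local interface sits in.
	for third := 49; third <= 254; third += 9 {
		_, subnet, err := net.ParseCIDR(fmt.Sprintf("192.168.%d.0/24", third))
		if err != nil {
			continue
		}
		if !taken(subnet) {
			fmt.Println("using free private subnet", subnet)
			return
		}
	}
}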

                                                
                                    
TestKicCustomSubnet (34.23s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-323352 --subnet=192.168.60.0/24
E0908 13:50:55.167384    4118 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2314/.minikube/profiles/addons-073153/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-323352 --subnet=192.168.60.0/24: (32.127748776s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-323352 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-323352" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-323352
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-323352: (2.076791222s)
--- PASS: TestKicCustomSubnet (34.23s)

                                                
                                    
TestKicStaticIP (36.1s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-293761 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-293761 --static-ip=192.168.200.200: (33.903319384s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-293761 ip
helpers_test.go:175: Cleaning up "static-ip-293761" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-293761
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-293761: (2.033151953s)
--- PASS: TestKicStaticIP (36.10s)

                                                
                                    
TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
TestMinikubeProfile (73.03s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-550049 --driver=docker  --container-runtime=containerd
E0908 13:52:18.238038    4118 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2314/.minikube/profiles/addons-073153/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-550049 --driver=docker  --container-runtime=containerd: (35.21463205s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-552519 --driver=docker  --container-runtime=containerd
E0908 13:53:04.507022    4118 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2314/.minikube/profiles/functional-478007/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-552519 --driver=docker  --container-runtime=containerd: (32.575175789s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-550049
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-552519
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-552519" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-552519
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-552519: (1.990314445s)
helpers_test.go:175: Cleaning up "first-550049" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-550049
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-550049: (1.926490163s)
--- PASS: TestMinikubeProfile (73.03s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (6.72s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-828728 --memory=3072 --mount-string /tmp/TestMountStartserial2413176250/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-828728 --memory=3072 --mount-string /tmp/TestMountStartserial2413176250/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (5.722036881s)
--- PASS: TestMountStart/serial/StartWithMountFirst (6.72s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.26s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-828728 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.26s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (8.44s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-830722 --memory=3072 --mount-string /tmp/TestMountStartserial2413176250/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-830722 --memory=3072 --mount-string /tmp/TestMountStartserial2413176250/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (7.438236386s)
--- PASS: TestMountStart/serial/StartWithMountSecond (8.44s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.25s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-830722 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.25s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.6s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-828728 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-828728 --alsologtostderr -v=5: (1.600303621s)
--- PASS: TestMountStart/serial/DeleteFirst (1.60s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.25s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-830722 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.25s)

                                                
                                    
TestMountStart/serial/Stop (1.2s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-830722
mount_start_test.go:196: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-830722: (1.200010059s)
--- PASS: TestMountStart/serial/Stop (1.20s)

                                                
                                    
TestMountStart/serial/RestartStopped (7.7s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-830722
mount_start_test.go:207: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-830722: (6.698116936s)
--- PASS: TestMountStart/serial/RestartStopped (7.70s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.27s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-830722 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.27s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (94.75s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-789083 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-789083 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m34.253899888s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-789083 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (94.75s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (17.96s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-789083 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-789083 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-789083 -- rollout status deployment/busybox: (15.726608444s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-789083 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-789083 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-789083 -- exec busybox-7b57f96db7-8ntck -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-789083 -- exec busybox-7b57f96db7-k97vc -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-789083 -- exec busybox-7b57f96db7-8ntck -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-789083 -- exec busybox-7b57f96db7-k97vc -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-789083 -- exec busybox-7b57f96db7-8ntck -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-789083 -- exec busybox-7b57f96db7-k97vc -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (17.96s)
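
Note: the deploy-and-resolve flow above reduces to the following pattern (a sketch with placeholder profile and pod names, not the test's exact code):

	# deploy a two-replica busybox workload and wait for the rollout
	out/minikube-linux-arm64 kubectl -p <profile> -- apply -f multinode-pod-dns-test.yaml
	out/minikube-linux-arm64 kubectl -p <profile> -- rollout status deployment/busybox
	# verify in-cluster DNS from each pod, on each node
	out/minikube-linux-arm64 kubectl -p <profile> -- exec <pod> -- nslookup kubernetes.default.svc.cluster.local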

TestMultiNode/serial/PingHostFrom2Pods (0.96s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-789083 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-789083 -- exec busybox-7b57f96db7-8ntck -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-789083 -- exec busybox-7b57f96db7-8ntck -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-789083 -- exec busybox-7b57f96db7-k97vc -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-789083 -- exec busybox-7b57f96db7-k97vc -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.96s)
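
Note: the shell pipeline used above extracts the resolved address from busybox's nslookup output: the fifth output line carries the answer record, and the third space-delimited field of that line is the IP, which is then pinged. A sketch with placeholder names:

	HOST_IP=$(out/minikube-linux-arm64 kubectl -p <profile> -- exec <pod> -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
	out/minikube-linux-arm64 kubectl -p <profile> -- exec <pod> -- sh -c "ping -c 1 $HOST_IP"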

TestMultiNode/serial/AddNode (13.78s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-789083 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-789083 -v=5 --alsologtostderr: (13.126739488s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-789083 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (13.78s)

TestMultiNode/serial/MultiNodeLabels (0.09s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-789083 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.09s)

TestMultiNode/serial/ProfileList (0.81s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.81s)

TestMultiNode/serial/CopyFile (10.02s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-789083 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-789083 cp testdata/cp-test.txt multinode-789083:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-789083 ssh -n multinode-789083 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-789083 cp multinode-789083:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2154721793/001/cp-test_multinode-789083.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-789083 ssh -n multinode-789083 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-789083 cp multinode-789083:/home/docker/cp-test.txt multinode-789083-m02:/home/docker/cp-test_multinode-789083_multinode-789083-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-789083 ssh -n multinode-789083 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-789083 ssh -n multinode-789083-m02 "sudo cat /home/docker/cp-test_multinode-789083_multinode-789083-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-789083 cp multinode-789083:/home/docker/cp-test.txt multinode-789083-m03:/home/docker/cp-test_multinode-789083_multinode-789083-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-789083 ssh -n multinode-789083 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-789083 ssh -n multinode-789083-m03 "sudo cat /home/docker/cp-test_multinode-789083_multinode-789083-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-789083 cp testdata/cp-test.txt multinode-789083-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-789083 ssh -n multinode-789083-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-789083 cp multinode-789083-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2154721793/001/cp-test_multinode-789083-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-789083 ssh -n multinode-789083-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-789083 cp multinode-789083-m02:/home/docker/cp-test.txt multinode-789083:/home/docker/cp-test_multinode-789083-m02_multinode-789083.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-789083 ssh -n multinode-789083-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-789083 ssh -n multinode-789083 "sudo cat /home/docker/cp-test_multinode-789083-m02_multinode-789083.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-789083 cp multinode-789083-m02:/home/docker/cp-test.txt multinode-789083-m03:/home/docker/cp-test_multinode-789083-m02_multinode-789083-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-789083 ssh -n multinode-789083-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-789083 ssh -n multinode-789083-m03 "sudo cat /home/docker/cp-test_multinode-789083-m02_multinode-789083-m03.txt"
E0908 13:55:55.167353    4118 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2314/.minikube/profiles/addons-073153/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-789083 cp testdata/cp-test.txt multinode-789083-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-789083 ssh -n multinode-789083-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-789083 cp multinode-789083-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2154721793/001/cp-test_multinode-789083-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-789083 ssh -n multinode-789083-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-789083 cp multinode-789083-m03:/home/docker/cp-test.txt multinode-789083:/home/docker/cp-test_multinode-789083-m03_multinode-789083.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-789083 ssh -n multinode-789083-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-789083 ssh -n multinode-789083 "sudo cat /home/docker/cp-test_multinode-789083-m03_multinode-789083.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-789083 cp multinode-789083-m03:/home/docker/cp-test.txt multinode-789083-m02:/home/docker/cp-test_multinode-789083-m03_multinode-789083-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-789083 ssh -n multinode-789083-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-789083 ssh -n multinode-789083-m02 "sudo cat /home/docker/cp-test_multinode-789083-m03_multinode-789083-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.02s)
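
Note: the matrix above exercises three directions of "minikube cp" for every node, each verified by reading the file back over "ssh -n <node>". A condensed sketch with placeholder names:

	out/minikube-linux-arm64 -p <profile> cp testdata/cp-test.txt <node>:/home/docker/cp-test.txt                        # host -> node
	out/minikube-linux-arm64 -p <profile> cp <node>:/home/docker/cp-test.txt /tmp/cp-test_<node>.txt                     # node -> host
	out/minikube-linux-arm64 -p <profile> cp <node>:/home/docker/cp-test.txt <other-node>:/home/docker/cp-test_copy.txt  # node -> node
	out/minikube-linux-arm64 -p <profile> ssh -n <other-node> "sudo cat /home/docker/cp-test_copy.txt"                   # verify contents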

TestMultiNode/serial/StopNode (2.54s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-789083 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-789083 node stop m03: (1.202508527s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-789083 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-789083 status: exit status 7 (688.774392ms)

-- stdout --
	multinode-789083
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-789083-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-789083-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-789083 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-789083 status --alsologtostderr: exit status 7 (644.856438ms)

-- stdout --
	multinode-789083
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-789083-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-789083-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0908 13:56:00.398515  130351 out.go:360] Setting OutFile to fd 1 ...
	I0908 13:56:00.403427  130351 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 13:56:00.403491  130351 out.go:374] Setting ErrFile to fd 2...
	I0908 13:56:00.403522  130351 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 13:56:00.403955  130351 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21504-2314/.minikube/bin
	I0908 13:56:00.404253  130351 out.go:368] Setting JSON to false
	I0908 13:56:00.404511  130351 notify.go:220] Checking for updates...
	I0908 13:56:00.404552  130351 mustload.go:65] Loading cluster: multinode-789083
	I0908 13:56:00.406584  130351 config.go:182] Loaded profile config "multinode-789083": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0908 13:56:00.406659  130351 status.go:174] checking status of multinode-789083 ...
	I0908 13:56:00.407521  130351 cli_runner.go:164] Run: docker container inspect multinode-789083 --format={{.State.Status}}
	I0908 13:56:00.448801  130351 status.go:371] multinode-789083 host status = "Running" (err=<nil>)
	I0908 13:56:00.448837  130351 host.go:66] Checking if "multinode-789083" exists ...
	I0908 13:56:00.449178  130351 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-789083
	I0908 13:56:00.470167  130351 host.go:66] Checking if "multinode-789083" exists ...
	I0908 13:56:00.470520  130351 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0908 13:56:00.470574  130351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-789083
	I0908 13:56:00.497654  130351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32908 SSHKeyPath:/home/jenkins/minikube-integration/21504-2314/.minikube/machines/multinode-789083/id_rsa Username:docker}
	I0908 13:56:00.601900  130351 ssh_runner.go:195] Run: systemctl --version
	I0908 13:56:00.606710  130351 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0908 13:56:00.619246  130351 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0908 13:56:00.675494  130351 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-09-08 13:56:00.665574621 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0908 13:56:00.676047  130351 kubeconfig.go:125] found "multinode-789083" server: "https://192.168.67.2:8443"
	I0908 13:56:00.676091  130351 api_server.go:166] Checking apiserver status ...
	I0908 13:56:00.676138  130351 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0908 13:56:00.688463  130351 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1481/cgroup
	I0908 13:56:00.698606  130351 api_server.go:182] apiserver freezer: "8:freezer:/docker/4f5e9089fa02127e4e13801c2da3f64c9d482aa1fbca9b75e04b2ce167626adc/kubepods/burstable/podb14ab027291b6958aae8cfa54ad4d23b/efa138f13a83494bfd76f74b2a0726515a9fc5145e0a17e7e67905879fc5b1d9"
	I0908 13:56:00.698674  130351 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/4f5e9089fa02127e4e13801c2da3f64c9d482aa1fbca9b75e04b2ce167626adc/kubepods/burstable/podb14ab027291b6958aae8cfa54ad4d23b/efa138f13a83494bfd76f74b2a0726515a9fc5145e0a17e7e67905879fc5b1d9/freezer.state
	I0908 13:56:00.707674  130351 api_server.go:204] freezer state: "THAWED"
	I0908 13:56:00.707702  130351 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0908 13:56:00.716241  130351 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0908 13:56:00.716276  130351 status.go:463] multinode-789083 apiserver status = Running (err=<nil>)
	I0908 13:56:00.716289  130351 status.go:176] multinode-789083 status: &{Name:multinode-789083 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0908 13:56:00.716312  130351 status.go:174] checking status of multinode-789083-m02 ...
	I0908 13:56:00.716692  130351 cli_runner.go:164] Run: docker container inspect multinode-789083-m02 --format={{.State.Status}}
	I0908 13:56:00.734180  130351 status.go:371] multinode-789083-m02 host status = "Running" (err=<nil>)
	I0908 13:56:00.734258  130351 host.go:66] Checking if "multinode-789083-m02" exists ...
	I0908 13:56:00.734618  130351 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-789083-m02
	I0908 13:56:00.753325  130351 host.go:66] Checking if "multinode-789083-m02" exists ...
	I0908 13:56:00.753653  130351 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0908 13:56:00.753698  130351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-789083-m02
	I0908 13:56:00.774022  130351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32913 SSHKeyPath:/home/jenkins/minikube-integration/21504-2314/.minikube/machines/multinode-789083-m02/id_rsa Username:docker}
	I0908 13:56:00.861813  130351 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0908 13:56:00.881188  130351 status.go:176] multinode-789083-m02 status: &{Name:multinode-789083-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0908 13:56:00.881219  130351 status.go:174] checking status of multinode-789083-m03 ...
	I0908 13:56:00.881532  130351 cli_runner.go:164] Run: docker container inspect multinode-789083-m03 --format={{.State.Status}}
	I0908 13:56:00.901692  130351 status.go:371] multinode-789083-m03 host status = "Stopped" (err=<nil>)
	I0908 13:56:00.901716  130351 status.go:384] host is not running, skipping remaining checks
	I0908 13:56:00.901737  130351 status.go:176] multinode-789083-m03 status: &{Name:multinode-789083-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.54s)
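
Note: the two "Non-zero exit" runs above are expected, because "minikube status" reflects cluster state in its exit code: in this run exit status 7 accompanies the Stopped m03 host shown in the stdout blocks, while 0 would mean every node is running. A sketch:

	out/minikube-linux-arm64 -p <profile> status
	echo $?   # 7 in the run above, once one node's host is Stopped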

TestMultiNode/serial/StartAfterStop (7.73s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-789083 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-789083 node start m03 -v=5 --alsologtostderr: (6.894022479s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-789083 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (7.73s)

TestMultiNode/serial/RestartKeepsNodes (79.83s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-789083
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-789083
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-789083: (24.989083775s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-789083 --wait=true -v=5 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-789083 --wait=true -v=5 --alsologtostderr: (54.705981721s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-789083
--- PASS: TestMultiNode/serial/RestartKeepsNodes (79.83s)
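
Note: the invariant checked above is that a full stop followed by a --wait=true restart preserves the node list. A sketch of the same check with a placeholder profile:

	BEFORE=$(out/minikube-linux-arm64 node list -p <profile>)
	out/minikube-linux-arm64 stop -p <profile>
	out/minikube-linux-arm64 start -p <profile> --wait=true
	AFTER=$(out/minikube-linux-arm64 node list -p <profile>)
	[ "$BEFORE" = "$AFTER" ]   # expected to hold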

TestMultiNode/serial/DeleteNode (5.48s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-789083 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-789083 node delete m03: (4.839611554s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-789083 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.48s)

TestMultiNode/serial/StopMultiNode (23.96s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-789083 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-789083 stop: (23.765456088s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-789083 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-789083 status: exit status 7 (91.792843ms)

-- stdout --
	multinode-789083
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-789083-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-789083 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-789083 status --alsologtostderr: exit status 7 (98.567604ms)

-- stdout --
	multinode-789083
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-789083-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0908 13:57:57.861601  139008 out.go:360] Setting OutFile to fd 1 ...
	I0908 13:57:57.861792  139008 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 13:57:57.861820  139008 out.go:374] Setting ErrFile to fd 2...
	I0908 13:57:57.861841  139008 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 13:57:57.862130  139008 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21504-2314/.minikube/bin
	I0908 13:57:57.862364  139008 out.go:368] Setting JSON to false
	I0908 13:57:57.862443  139008 mustload.go:65] Loading cluster: multinode-789083
	I0908 13:57:57.862495  139008 notify.go:220] Checking for updates...
	I0908 13:57:57.863465  139008 config.go:182] Loaded profile config "multinode-789083": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0908 13:57:57.863526  139008 status.go:174] checking status of multinode-789083 ...
	I0908 13:57:57.864105  139008 cli_runner.go:164] Run: docker container inspect multinode-789083 --format={{.State.Status}}
	I0908 13:57:57.883365  139008 status.go:371] multinode-789083 host status = "Stopped" (err=<nil>)
	I0908 13:57:57.883386  139008 status.go:384] host is not running, skipping remaining checks
	I0908 13:57:57.883392  139008 status.go:176] multinode-789083 status: &{Name:multinode-789083 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0908 13:57:57.883420  139008 status.go:174] checking status of multinode-789083-m02 ...
	I0908 13:57:57.883749  139008 cli_runner.go:164] Run: docker container inspect multinode-789083-m02 --format={{.State.Status}}
	I0908 13:57:57.909029  139008 status.go:371] multinode-789083-m02 host status = "Stopped" (err=<nil>)
	I0908 13:57:57.909049  139008 status.go:384] host is not running, skipping remaining checks
	I0908 13:57:57.909056  139008 status.go:176] multinode-789083-m02 status: &{Name:multinode-789083-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.96s)

TestMultiNode/serial/RestartMultiNode (50.04s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-789083 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd
E0908 13:58:04.507235    4118 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2314/.minikube/profiles/functional-478007/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-789083 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd: (49.388210502s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-789083 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (50.04s)

TestMultiNode/serial/ValidateNameConflict (33.7s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-789083
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-789083-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-789083-m02 --driver=docker  --container-runtime=containerd: exit status 14 (95.135148ms)

-- stdout --
	* [multinode-789083-m02] minikube v1.36.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21504
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21504-2314/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21504-2314/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-789083-m02' is duplicated with machine name 'multinode-789083-m02' in profile 'multinode-789083'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-789083-m03 --driver=docker  --container-runtime=containerd
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-789083-m03 --driver=docker  --container-runtime=containerd: (31.163781616s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-789083
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-789083: exit status 80 (430.419403ms)

-- stdout --
	* Adding node m03 to cluster multinode-789083 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-789083-m03 already exists in multinode-789083-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-789083-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-789083-m03: (1.95495245s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (33.70s)
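
Note: both rejections above are the intended behavior. Nodes of a multi-node profile occupy machine names of the form <profile>-m02, <profile>-m03, ..., so a new profile may not reuse one of those names (exit status 14, MK_USAGE), and "node add" refuses when the next generated node name collides with an existing profile (exit status 80, GUEST_NODE_ADD). A sketch of the collision:

	out/minikube-linux-arm64 start -p <profile>        # creates machine <profile>
	out/minikube-linux-arm64 node add -p <profile>     # creates machine <profile>-m02
	out/minikube-linux-arm64 start -p <profile>-m02    # rejected: name taken by the node above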

TestPreload (146.58s)

=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-053548 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.0
E0908 13:59:27.574043    4118 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2314/.minikube/profiles/functional-478007/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:43: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-053548 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.0: (1m17.873433663s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-053548 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-arm64 -p test-preload-053548 image pull gcr.io/k8s-minikube/busybox: (2.393744449s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-053548
preload_test.go:57: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-053548: (5.74086057s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-053548 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd
E0908 14:00:55.166964    4118 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2314/.minikube/profiles/addons-073153/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:65: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-053548 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd: (57.980669084s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-053548 image list
helpers_test.go:175: Cleaning up "test-preload-053548" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-053548
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-053548: (2.344018743s)
--- PASS: TestPreload (146.58s)
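
Note: the flow above checks that an image pulled into a cluster created with --preload=false survives a stop and a subsequent restart with default (preloaded) settings. A condensed sketch with a placeholder profile; "..." elides the driver/runtime flags shown in the log:

	out/minikube-linux-arm64 start -p <profile> --preload=false --kubernetes-version=v1.32.0 ...
	out/minikube-linux-arm64 -p <profile> image pull gcr.io/k8s-minikube/busybox
	out/minikube-linux-arm64 stop -p <profile>
	out/minikube-linux-arm64 start -p <profile> ...
	out/minikube-linux-arm64 -p <profile> image list   # busybox should still be listed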

TestInsufficientStorage (9.94s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-412134 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=containerd
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-412134 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (7.514691947s)

-- stdout --
	{"specversion":"1.0","id":"1125a934-cb0a-490d-9d5b-78196b549511","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-412134] minikube v1.36.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"18b4bba5-e585-4512-a074-a92b5b4259e9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21504"}}
	{"specversion":"1.0","id":"36ee117d-3375-4618-bddb-eaf1247c2c59","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"fc8f250c-5ddf-43f6-abe6-6ec7cf1b3391","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21504-2314/kubeconfig"}}
	{"specversion":"1.0","id":"0c4ce783-e930-45ea-9b3c-b178912572b8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21504-2314/.minikube"}}
	{"specversion":"1.0","id":"13705e7b-9548-44a2-829a-4b295aa4cb7b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"3e95c601-dd92-4b03-b99d-728a37764c90","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"c15ab467-fcee-4276-ac9e-dcafa8c9f1ae","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"7ff25f4c-03c0-402b-8139-fc28aa4ec49b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"893b1b89-0033-4275-922f-c4030234345b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"c34fe4d3-3750-4772-9bd8-356c6a183a9f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"4079035f-d1c0-4177-90da-24ff223c0c4f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-412134\" primary control-plane node in \"insufficient-storage-412134\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"a0a0e458-1847-406d-b468-14c507795012","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.47-1756980985-21488 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"f800acd2-9438-45e4-a48e-ff8771365de8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"09342894-00dc-43b6-8198-2d6ef8631146","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-412134 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-412134 --output=json --layout=cluster: exit status 7 (282.983043ms)

-- stdout --
	{"Name":"insufficient-storage-412134","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.36.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-412134","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0908 14:02:33.622565  157320 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-412134" does not appear in /home/jenkins/minikube-integration/21504-2314/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-412134 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-412134 --output=json --layout=cluster: exit status 7 (274.222006ms)

-- stdout --
	{"Name":"insufficient-storage-412134","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.36.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-412134","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0908 14:02:33.894501  157383 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-412134" does not appear in /home/jenkins/minikube-integration/21504-2314/kubeconfig
	E0908 14:02:33.905772  157383 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/21504-2314/.minikube/profiles/insufficient-storage-412134/events.json: no such file or directory

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-412134" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-412134
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-412134: (1.869285237s)
--- PASS: TestInsufficientStorage (9.94s)
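
Note: the JSON lines above are the CloudEvents-style records emitted by --output=json; the test drives the disk-space check with the MINIKUBE_TEST_STORAGE_CAPACITY / MINIKUBE_TEST_AVAILABLE_STORAGE variables visible in that output, so start aborts with RSRC_DOCKER_STORAGE. A sketch of the setup, with values taken from the run above:

	export MINIKUBE_TEST_STORAGE_CAPACITY=100   # makes the free-space check see /var at 100% of capacity
	export MINIKUBE_TEST_AVAILABLE_STORAGE=19
	out/minikube-linux-arm64 start -p <profile> --output=json ...
	echo $?   # 26, per the run above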

TestRunningBinaryUpgrade (67.58s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.410934128 start -p running-upgrade-841851 --memory=3072 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.410934128 start -p running-upgrade-841851 --memory=3072 --vm-driver=docker  --container-runtime=containerd: (34.76044859s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-841851 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-841851 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (29.062880491s)
helpers_test.go:175: Cleaning up "running-upgrade-841851" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-841851
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-841851: (2.245717865s)
--- PASS: TestRunningBinaryUpgrade (67.58s)

TestKubernetesUpgrade (206.98s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-333796 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-333796 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (37.82905397s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-333796
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-333796: (1.238584356s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-333796 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-333796 status --format={{.Host}}: exit status 7 (67.151377ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-333796 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-333796 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (2m23.022965221s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-333796 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-333796 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-333796 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd: exit status 106 (112.537819ms)

-- stdout --
	* [kubernetes-upgrade-333796] minikube v1.36.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21504
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21504-2314/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21504-2314/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.0 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-333796
	    minikube start -p kubernetes-upgrade-333796 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-3337962 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.0, by running:
	    
	    minikube start -p kubernetes-upgrade-333796 --kubernetes-version=v1.34.0
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-333796 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-333796 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (19.974073347s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-333796" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-333796
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-333796: (4.625142216s)
--- PASS: TestKubernetesUpgrade (206.98s)
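
Note: the upgrade path exercised above is stop-then-start with a newer --kubernetes-version, while an in-place downgrade is refused (exit status 106, K8S_DOWNGRADE_UNSUPPORTED) with delete/recreate or a second profile offered as alternatives. A condensed sketch; "..." elides the flags shown in the log:

	out/minikube-linux-arm64 start -p <profile> --kubernetes-version=v1.28.0 ...
	out/minikube-linux-arm64 stop -p <profile>
	out/minikube-linux-arm64 start -p <profile> --kubernetes-version=v1.34.0 ...   # upgrade: allowed
	out/minikube-linux-arm64 start -p <profile> --kubernetes-version=v1.28.0 ...   # downgrade: exit 106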

TestMissingContainerUpgrade (133.89s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.32.0.3444958391 start -p missing-upgrade-451426 --memory=3072 --driver=docker  --container-runtime=containerd
E0908 14:03:04.506762    4118 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2314/.minikube/profiles/functional-478007/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.32.0.3444958391 start -p missing-upgrade-451426 --memory=3072 --driver=docker  --container-runtime=containerd: (1m6.996237095s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-451426
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-451426
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-451426 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-451426 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m1.147146925s)
helpers_test.go:175: Cleaning up "missing-upgrade-451426" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-451426
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-451426: (3.025981248s)
--- PASS: TestMissingContainerUpgrade (133.89s)
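
Note: this scenario creates a cluster with an old release binary, deletes its container behind minikube's back, and checks that the current binary's start recreates it. A sketch; <suffix> stands for the temp-file suffix in the log and <profile> is a placeholder:

	/tmp/minikube-v1.32.0.<suffix> start -p <profile> ...   # old release binary
	docker stop <profile> && docker rm <profile>            # simulate the missing container
	out/minikube-linux-arm64 start -p <profile> ...         # must recover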

TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:85: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-930135 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:85: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-930135 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd: exit status 14 (99.920808ms)

-- stdout --
	* [NoKubernetes-930135] minikube v1.36.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21504
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21504-2314/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21504-2314/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)
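
Note: --no-kubernetes and --kubernetes-version are mutually exclusive, which is the MK_USAGE rejection (exit status 14) shown above; the stderr hint also covers the case where the version comes from global config rather than the command line:

	out/minikube-linux-arm64 start -p <profile> --no-kubernetes --kubernetes-version=v1.28.0   # exit 14
	minikube config unset kubernetes-version   # clears a globally configured version, per the hint above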

TestNoKubernetes/serial/StartWithK8s (39.31s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:97: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-930135 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:97: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-930135 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (38.891557118s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-930135 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (39.31s)

TestNoKubernetes/serial/StartWithStopK8s (25.36s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:114: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-930135 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:114: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-930135 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (22.334398845s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-930135 status -o json
no_kubernetes_test.go:202: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-930135 status -o json: exit status 2 (477.322629ms)

-- stdout --
	{"Name":"NoKubernetes-930135","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:126: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-930135
no_kubernetes_test.go:126: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-930135: (2.54982793s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (25.36s)

TestNoKubernetes/serial/Start (8.94s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:138: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-930135 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:138: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-930135 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (8.940205085s)
--- PASS: TestNoKubernetes/serial/Start (8.94s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.27s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-930135 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-930135 "sudo systemctl is-active --quiet service kubelet": exit status 1 (265.468013ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.27s)
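
Note: the exit status 1 above is the expected outcome: systemctl is-active exits non-zero for an inactive unit (status 3, per the stderr line), and minikube ssh surfaces the remote failure as a non-zero exit, which is how the test confirms Kubernetes is not running:

	out/minikube-linux-arm64 ssh -p <profile> "sudo systemctl is-active --quiet service kubelet"
	echo $?   # non-zero while kubelet is inactive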

TestNoKubernetes/serial/ProfileList (0.66s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:171: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:181: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.66s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.2s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:160: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-930135
no_kubernetes_test.go:160: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-930135: (1.197909615s)
--- PASS: TestNoKubernetes/serial/Stop (1.20s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (6.02s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:193: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-930135 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:193: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-930135 --driver=docker  --container-runtime=containerd: (6.023914943s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.02s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.27s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-930135 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-930135 "sudo systemctl is-active --quiet service kubelet": exit status 1 (268.632872ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.27s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (1.73s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.73s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (74.73s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.972728454 start -p stopped-upgrade-315126 --memory=3072 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.972728454 start -p stopped-upgrade-315126 --memory=3072 --vm-driver=docker  --container-runtime=containerd: (38.799994527s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.972728454 -p stopped-upgrade-315126 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.972728454 -p stopped-upgrade-315126 stop: (1.231094263s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-315126 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E0908 14:05:55.167130    4118 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2314/.minikube/profiles/addons-073153/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-315126 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (34.695097891s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (74.73s)
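Note: the upgrade flow above boils down to three steps: start the profile with the old release, stop it, then start the same profile with the binary under test. Sketched in shell (the /tmp binary is the cached v1.32.0 release the test downloads):

	/tmp/minikube-v1.32.0.972728454 start -p stopped-upgrade-315126 --memory=3072 --vm-driver=docker --container-runtime=containerd
	/tmp/minikube-v1.32.0.972728454 -p stopped-upgrade-315126 stop
	out/minikube-linux-arm64 start -p stopped-upgrade-315126 --memory=3072 --driver=docker --container-runtime=containerd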

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.7s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-315126
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-315126: (1.704898089s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.70s)

                                                
                                    
TestPause/serial/Start (91.45s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-521388 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-521388 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (1m31.452687626s)
--- PASS: TestPause/serial/Start (91.45s)

                                                
                                    
TestNetworkPlugins/group/false (3.76s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-175909 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-175909 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd: exit status 14 (194.447758ms)

                                                
                                                
-- stdout --
	* [false-175909] minikube v1.36.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21504
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21504-2314/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21504-2314/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0908 14:08:12.478715  191735 out.go:360] Setting OutFile to fd 1 ...
	I0908 14:08:12.478888  191735 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 14:08:12.478901  191735 out.go:374] Setting ErrFile to fd 2...
	I0908 14:08:12.478907  191735 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 14:08:12.479724  191735 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21504-2314/.minikube/bin
	I0908 14:08:12.480201  191735 out.go:368] Setting JSON to false
	I0908 14:08:12.481178  191735 start.go:130] hostinfo: {"hostname":"ip-172-31-30-239","uptime":3043,"bootTime":1757337450,"procs":193,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0908 14:08:12.481281  191735 start.go:140] virtualization:  
	I0908 14:08:12.485031  191735 out.go:179] * [false-175909] minikube v1.36.0 on Ubuntu 20.04 (arm64)
	I0908 14:08:12.488782  191735 out.go:179]   - MINIKUBE_LOCATION=21504
	I0908 14:08:12.488929  191735 notify.go:220] Checking for updates...
	I0908 14:08:12.494431  191735 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0908 14:08:12.497332  191735 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21504-2314/kubeconfig
	I0908 14:08:12.500272  191735 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21504-2314/.minikube
	I0908 14:08:12.503222  191735 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0908 14:08:12.506112  191735 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0908 14:08:12.509443  191735 config.go:182] Loaded profile config "pause-521388": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0908 14:08:12.509574  191735 driver.go:421] Setting default libvirt URI to qemu:///system
	I0908 14:08:12.546051  191735 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0908 14:08:12.546171  191735 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0908 14:08:12.604645  191735 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-09-08 14:08:12.595460353 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0908 14:08:12.604753  191735 docker.go:318] overlay module found
	I0908 14:08:12.607826  191735 out.go:179] * Using the docker driver based on user configuration
	I0908 14:08:12.610656  191735 start.go:304] selected driver: docker
	I0908 14:08:12.610676  191735 start.go:918] validating driver "docker" against <nil>
	I0908 14:08:12.610691  191735 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0908 14:08:12.614204  191735 out.go:203] 
	W0908 14:08:12.617126  191735 out.go:285] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I0908 14:08:12.619915  191735 out.go:203] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-175909 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-175909

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-175909

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-175909

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-175909

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-175909

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-175909

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-175909

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-175909

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-175909

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-175909

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-175909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-175909"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-175909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-175909"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-175909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-175909"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-175909

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-175909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-175909"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-175909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-175909"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-175909" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-175909" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-175909" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-175909" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-175909" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-175909" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-175909" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-175909" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-175909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-175909"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-175909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-175909"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-175909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-175909"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-175909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-175909"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-175909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-175909"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-175909" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-175909" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-175909" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-175909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-175909"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-175909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-175909"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-175909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-175909"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-175909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-175909"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-175909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-175909"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21504-2314/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 08 Sep 2025 14:08:01 UTC
        provider: minikube.sigs.k8s.io
        version: v1.36.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: pause-521388
contexts:
- context:
    cluster: pause-521388
    extensions:
    - extension:
        last-update: Mon, 08 Sep 2025 14:08:01 UTC
        provider: minikube.sigs.k8s.io
        version: v1.36.0
      name: context_info
    namespace: default
    user: pause-521388
  name: pause-521388
current-context: ""
kind: Config
preferences: {}
users:
- name: pause-521388
  user:
    client-certificate: /home/jenkins/minikube-integration/21504-2314/.minikube/profiles/pause-521388/client.crt
    client-key: /home/jenkins/minikube-integration/21504-2314/.minikube/profiles/pause-521388/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-175909

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-175909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-175909"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-175909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-175909"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-175909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-175909"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-175909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-175909"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-175909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-175909"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-175909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-175909"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-175909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-175909"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-175909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-175909"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-175909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-175909"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-175909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-175909"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-175909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-175909"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-175909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-175909"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-175909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-175909"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-175909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-175909"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-175909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-175909"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-175909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-175909"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-175909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-175909"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-175909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-175909"

                                                
                                                
----------------------- debugLogs end: false-175909 [took: 3.407867351s] --------------------------------
helpers_test.go:175: Cleaning up "false-175909" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-175909
--- PASS: TestNetworkPlugins/group/false (3.76s)
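Note: the interesting part of this test is the exit status 14 (MK_USAGE) above: minikube rejects --cni=false up front whenever the container runtime is containerd, before any cluster is created. A sketch of the contrast, reusing the throwaway profile name:

	out/minikube-linux-arm64 start -p false-175909 --cni=false --container-runtime=containerd --driver=docker
	# => X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	out/minikube-linux-arm64 start -p false-175909 --cni=bridge --container-runtime=containerd --driver=docker
	# => passes the validation step and proceeds to create the cluster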

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (8.03s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-521388 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-521388 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (8.005114671s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (8.03s)

                                                
                                    
TestPause/serial/Pause (0.9s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-521388 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.90s)

                                                
                                    
TestPause/serial/VerifyStatus (0.43s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-521388 --output=json --layout=cluster
E0908 14:08:58.239748    4118 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2314/.minikube/profiles/addons-073153/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-521388 --output=json --layout=cluster: exit status 2 (433.422575ms)

                                                
                                                
-- stdout --
	{"Name":"pause-521388","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.36.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-521388","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.43s)
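Note: in the cluster layout above, the StatusCode fields reuse HTTP-style codes: 418 is Paused, 405 is Stopped, 200 is OK, and the command exits 2 here because components are paused. A sketch for pulling out the per-component names, assuming jq is available:

	out/minikube-linux-arm64 status -p pause-521388 --output=json --layout=cluster \
	  | jq '.Nodes[].Components | map_values(.StatusName)'
	# => { "apiserver": "Paused", "kubelet": "Stopped" }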

                                                
                                    
TestPause/serial/Unpause (0.84s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-521388 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.84s)

                                                
                                    
TestPause/serial/PauseAgain (1.38s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-521388 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-521388 --alsologtostderr -v=5: (1.376877753s)
--- PASS: TestPause/serial/PauseAgain (1.38s)

                                                
                                    
TestPause/serial/DeletePaused (3.03s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-521388 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-521388 --alsologtostderr -v=5: (3.034758802s)
--- PASS: TestPause/serial/DeletePaused (3.03s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (0.5s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-521388
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-521388: exit status 1 (46.600005ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-521388: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.50s)
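Note: `docker volume inspect` exits non-zero when the named volume does not exist, so the daemon error above is the desired outcome after deletion. By hand:

	# silence the expected error; the || branch is the "clean" result
	docker volume inspect pause-521388 >/dev/null 2>&1 || echo "profile volume removed"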

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (61.26s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-043789 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-043789 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0: (1m1.257636474s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (61.26s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (9.38s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-043789 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [14ad73ca-f9e3-4a2d-8f16-80a9e6f73c53] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [14ad73ca-f9e3-4a2d-8f16-80a9e6f73c53] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.003144953s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-043789 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.38s)
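Note: the deploy step is reproducible with plain kubectl (testdata/busybox.yaml lives in the minikube test tree; the wait below is a sketch of what the harness polls for):

	kubectl --context old-k8s-version-043789 create -f testdata/busybox.yaml
	kubectl --context old-k8s-version-043789 wait pod -l integration-test=busybox --for=condition=Ready --timeout=8m
	kubectl --context old-k8s-version-043789 exec busybox -- /bin/sh -c "ulimit -n"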

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.31s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-043789 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-043789 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.154765152s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-043789 describe deploy/metrics-server -n kube-system
E0908 14:10:55.167005    4118 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2314/.minikube/profiles/addons-073153/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.31s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (11.98s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-043789 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-043789 --alsologtostderr -v=3: (11.981884589s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (11.98s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-043789 -n old-k8s-version-043789
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-043789 -n old-k8s-version-043789: exit status 7 (82.838647ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-043789 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)
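Note: exit status 7 from `status` appears here when the profile is stopped, and the test explicitly tolerates it ("may be ok") before enabling the dashboard addon. A sketch of how a script can branch on it:

	out/minikube-linux-arm64 status --format='{{.Host}}' -p old-k8s-version-043789 -n old-k8s-version-043789
	[ $? -eq 7 ] && echo "profile stopped; addons can still be enabled before SecondStart"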

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (49.31s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-043789 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-043789 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0: (48.907951091s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-043789 -n old-k8s-version-043789
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (49.31s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-dqqvj" [e424c6fe-cb0a-4432-b615-4f2018d00536] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003962259s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-dqqvj" [e424c6fe-cb0a-4432-b615-4f2018d00536] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003776071s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-043789 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.11s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-043789 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.26s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (3.12s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-043789 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-043789 -n old-k8s-version-043789
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-043789 -n old-k8s-version-043789: exit status 2 (331.068857ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-043789 -n old-k8s-version-043789
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-043789 -n old-k8s-version-043789: exit status 2 (323.780893ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-043789 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-043789 -n old-k8s-version-043789
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-043789 -n old-k8s-version-043789
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.12s)
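Note: the pause round-trip above relies on `status` exiting 2 (rather than 0) while components are paused; the test treats that as acceptable, then unpauses and probes again. Condensed:

	out/minikube-linux-arm64 pause -p old-k8s-version-043789
	out/minikube-linux-arm64 status --format='{{.APIServer}}' -p old-k8s-version-043789   # Paused (exit 2)
	out/minikube-linux-arm64 unpause -p old-k8s-version-043789
	out/minikube-linux-arm64 status --format='{{.APIServer}}' -p old-k8s-version-043789   # expected to report Running again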

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (80.08s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-925401 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-925401 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0: (1m20.079587323s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (80.08s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (103.29s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-825303 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0
E0908 14:13:04.506994    4118 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2314/.minikube/profiles/functional-478007/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-825303 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0: (1m43.290915191s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (103.29s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (9.37s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-925401 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [54c20e9f-511b-41b6-9946-aabeb8b295ea] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [54c20e9f-511b-41b6-9946-aabeb8b295ea] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.00308144s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-925401 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.37s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.31s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-925401 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-925401 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.197661114s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-925401 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.31s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (12.07s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-925401 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-925401 --alsologtostderr -v=3: (12.06630261s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.07s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-925401 -n no-preload-925401
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-925401 -n no-preload-925401: exit status 7 (76.732103ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-925401 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (54.35s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-925401 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-925401 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0: (54.022064792s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-925401 -n no-preload-925401
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (54.35s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (10.42s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-825303 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [9454737e-1478-4386-81f0-815add165fe7] Pending
helpers_test.go:352: "busybox" [9454737e-1478-4386-81f0-815add165fe7] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [9454737e-1478-4386-81f0-815add165fe7] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.005285382s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-825303 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.42s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.16s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-825303 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-825303 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.056751462s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-825303 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.16s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (12.1s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-825303 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-825303 --alsologtostderr -v=3: (12.097264127s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.10s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-vmbs6" [50d8c160-519e-4a76-9199-aff86492f246] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003833508s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-825303 -n embed-certs-825303
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-825303 -n embed-certs-825303: exit status 7 (75.344624ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-825303 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (54.56s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-825303 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-825303 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0: (54.182546551s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-825303 -n embed-certs-825303
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (54.56s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.16s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-vmbs6" [50d8c160-519e-4a76-9199-aff86492f246] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004319715s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-925401 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.16s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.29s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-925401 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.29s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (4.29s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-925401 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-arm64 pause -p no-preload-925401 --alsologtostderr -v=1: (1.204368445s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-925401 -n no-preload-925401
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-925401 -n no-preload-925401: exit status 2 (441.249358ms)

                                                
                                                
-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-925401 -n no-preload-925401
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-925401 -n no-preload-925401: exit status 2 (404.723731ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-925401 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-arm64 unpause -p no-preload-925401 --alsologtostderr -v=1: (1.033304524s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-925401 -n no-preload-925401
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-925401 -n no-preload-925401
--- PASS: TestStartStop/group/no-preload/serial/Pause (4.29s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (93.17s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-337423 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0
E0908 14:15:44.670969    4118 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2314/.minikube/profiles/old-k8s-version-043789/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:15:44.677325    4118 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2314/.minikube/profiles/old-k8s-version-043789/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:15:44.688676    4118 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2314/.minikube/profiles/old-k8s-version-043789/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:15:44.710063    4118 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2314/.minikube/profiles/old-k8s-version-043789/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:15:44.751343    4118 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2314/.minikube/profiles/old-k8s-version-043789/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:15:44.832681    4118 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2314/.minikube/profiles/old-k8s-version-043789/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:15:44.994066    4118 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2314/.minikube/profiles/old-k8s-version-043789/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:15:45.316444    4118 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2314/.minikube/profiles/old-k8s-version-043789/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:15:45.957750    4118 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2314/.minikube/profiles/old-k8s-version-043789/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:15:47.239692    4118 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2314/.minikube/profiles/old-k8s-version-043789/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-337423 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0: (1m33.1673597s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (93.17s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-6fmbw" [b67e8293-4afe-4f87-a92c-460421afdf28] Running
E0908 14:15:49.801933    4118 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2314/.minikube/profiles/old-k8s-version-043789/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003742755s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-6fmbw" [b67e8293-4afe-4f87-a92c-460421afdf28] Running
E0908 14:15:54.923636    4118 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2314/.minikube/profiles/old-k8s-version-043789/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:15:55.167341    4118 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2314/.minikube/profiles/addons-073153/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003340051s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-825303 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.49s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-825303 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.49s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (3.16s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-825303 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-825303 -n embed-certs-825303
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-825303 -n embed-certs-825303: exit status 2 (318.203482ms)

                                                
                                                
-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-825303 -n embed-certs-825303
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-825303 -n embed-certs-825303: exit status 2 (322.144967ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-825303 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-825303 -n embed-certs-825303
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-825303 -n embed-certs-825303
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.16s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (39s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-944016 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0
E0908 14:16:07.576252    4118 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2314/.minikube/profiles/functional-478007/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:16:25.647157    4118 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2314/.minikube/profiles/old-k8s-version-043789/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-944016 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0: (38.996098167s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (39.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.29s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-944016 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-944016 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.286020861s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.29s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.44s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-337423 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [0c421302-3af0-4042-9f2f-3e547f1a0379] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [0c421302-3af0-4042-9f2f-3e547f1a0379] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.003930612s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-337423 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.44s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (1.24s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-944016 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-944016 --alsologtostderr -v=3: (1.242122606s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.24s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-944016 -n newest-cni-944016
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-944016 -n newest-cni-944016: exit status 7 (80.389817ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-944016 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (18.53s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-944016 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-944016 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0: (18.144813723s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-944016 -n newest-cni-944016
E0908 14:17:06.609649    4118 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2314/.minikube/profiles/old-k8s-version-043789/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (18.53s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.68s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-337423 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-337423 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.532620679s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-337423 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.68s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (12.38s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-337423 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-337423 --alsologtostderr -v=3: (12.381984888s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.38s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.27s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-944016 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.27s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (3.69s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-944016 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-944016 -n newest-cni-944016
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-944016 -n newest-cni-944016: exit status 2 (324.666821ms)

                                                
                                                
-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-944016 -n newest-cni-944016
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-944016 -n newest-cni-944016: exit status 2 (324.350617ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-944016 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-944016 -n newest-cni-944016
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-944016 -n newest-cni-944016
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.69s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.25s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-337423 -n default-k8s-diff-port-337423
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-337423 -n default-k8s-diff-port-337423: exit status 7 (90.605517ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-337423 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.25s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (29.54s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-337423 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-337423 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0: (28.987119366s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-337423 -n default-k8s-diff-port-337423
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (29.54s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (65.88s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-175909 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-175909 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd: (1m5.882092795s)
--- PASS: TestNetworkPlugins/group/auto/Start (65.88s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (7s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-tp4cd" [e615df7b-6d48-4d4f-8d8e-29b912c6e862] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-tp4cd" [e615df7b-6d48-4d4f-8d8e-29b912c6e862] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 7.004125629s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (7.00s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.15s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-tp4cd" [e615df7b-6d48-4d4f-8d8e-29b912c6e862] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004446464s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-337423 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.15s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.29s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-337423 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.29s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (3.72s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-337423 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-arm64 pause -p default-k8s-diff-port-337423 --alsologtostderr -v=1: (1.081025459s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-337423 -n default-k8s-diff-port-337423
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-337423 -n default-k8s-diff-port-337423: exit status 2 (352.223144ms)

                                                
                                                
-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-337423 -n default-k8s-diff-port-337423
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-337423 -n default-k8s-diff-port-337423: exit status 2 (360.211165ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-337423 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-337423 -n default-k8s-diff-port-337423
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-337423 -n default-k8s-diff-port-337423
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.72s)
E0908 14:24:00.788896    4118 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2314/.minikube/profiles/auto-175909/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:24:03.189718    4118 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2314/.minikube/profiles/no-preload-925401/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (90.49s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-175909 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd
E0908 14:18:04.506818    4118 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2314/.minikube/profiles/functional-478007/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-175909 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd: (1m30.492359963s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (90.49s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.37s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-175909 "pgrep -a kubelet"
I0908 14:18:19.498389    4118 config.go:182] Loaded profile config "auto-175909": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.37s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (9.37s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-175909 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-9fvkd" [cd8a0377-0db8-4563-8cae-4c0b5b4c9e43] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-9fvkd" [cd8a0377-0db8-4563-8cae-4c0b5b4c9e43] Running
E0908 14:18:28.531073    4118 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2314/.minikube/profiles/old-k8s-version-043789/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.003650968s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.37s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.24s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-175909 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.24s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.2s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-175909 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.20s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.25s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-175909 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.25s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (82.64s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-175909 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd
E0908 14:18:55.981561    4118 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2314/.minikube/profiles/no-preload-925401/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:19:16.463800    4118 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2314/.minikube/profiles/no-preload-925401/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-175909 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd: (1m22.644273179s)
--- PASS: TestNetworkPlugins/group/flannel/Start (82.64s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-cvbws" [a2a7ce44-2c7b-4ecf-b4e9-a559ece710ce] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003840796s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-175909 "pgrep -a kubelet"
I0908 14:19:35.713663    4118 config.go:182] Loaded profile config "kindnet-175909": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.30s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (9.28s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-175909 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-sdnj9" [f43a6b57-0093-4f7a-85d0-f43a8eb3874d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-sdnj9" [f43a6b57-0093-4f7a-85d0-f43a8eb3874d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 9.003325886s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (9.28s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.25s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-175909 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.25s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.19s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-175909 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.19s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-175909 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.16s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (86.58s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-175909 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-175909 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd: (1m26.577922s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (86.58s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-7vcfz" [c94c08c4-1d3b-4e7f-9b8e-d53c49e38245] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.00333881s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.00s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.34s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-175909 "pgrep -a kubelet"
I0908 14:20:21.247132    4118 config.go:182] Loaded profile config "flannel-175909": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.34s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (10.33s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-175909 replace --force -f testdata/netcat-deployment.yaml
I0908 14:20:21.563228    4118 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-8xr98" [5e7bf991-c8d2-4517-bf6d-49e33a18fac7] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-8xr98" [5e7bf991-c8d2-4517-bf6d-49e33a18fac7] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.002496717s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.33s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.25s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-175909 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.25s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.23s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-175909 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.23s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.26s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-175909 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.26s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (54.65s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-175909 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd
E0908 14:21:12.373376    4118 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2314/.minikube/profiles/old-k8s-version-043789/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:21:19.347706    4118 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2314/.minikube/profiles/no-preload-925401/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-175909 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd: (54.651612289s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (54.65s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.42s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-175909 "pgrep -a kubelet"
I0908 14:21:33.796342    4118 config.go:182] Loaded profile config "enable-default-cni-175909": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.42s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.4s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-175909 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-2wg5l" [b381c18e-0baa-4503-a7b8-178656fb4370] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-2wg5l" [b381c18e-0baa-4503-a7b8-178656fb4370] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.003717603s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.40s)
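NetCatPod replaces testdata/netcat-deployment.yaml and waits for pods labelled app=netcat to report Ready. A minimal sketch for watching the same rollout by hand, assuming the cluster is still up (context and label as logged above; -w is plain kubectl watch mode, not part of the test):

  kubectl --context enable-default-cni-175909 get pods -l app=netcat -w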

TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-175909 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-175909 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.2s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-175909 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.20s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.44s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-175909 "pgrep -a kubelet"
I0908 14:21:50.963079    4118 config.go:182] Loaded profile config "custom-flannel-175909": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.44s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (11.38s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-175909 replace --force -f testdata/netcat-deployment.yaml
E0908 14:21:51.148182    4118 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2314/.minikube/profiles/default-k8s-diff-port-337423/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-mxqc4" [657fa780-7397-48ba-8bc6-2d94eb1f8e63] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0908 14:21:56.270429    4118 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2314/.minikube/profiles/default-k8s-diff-port-337423/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-mxqc4" [657fa780-7397-48ba-8bc6-2d94eb1f8e63] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.004388705s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.38s)

TestNetworkPlugins/group/custom-flannel/DNS (0.29s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-175909 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.29s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.26s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-175909 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.26s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-175909 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.18s)
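DNS, Localhost and HairPin form the connectivity triad the suite runs against every CNI: service-name resolution, a loopback dial, and a dial back to the pod's own service name. A minimal sketch reproducing the three probes by hand, with the exec commands taken verbatim from the logs above:

  kubectl --context custom-flannel-175909 exec deployment/netcat -- nslookup kubernetes.default
  kubectl --context custom-flannel-175909 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
  kubectl --context custom-flannel-175909 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"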

TestNetworkPlugins/group/bridge/Start (75.94s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-175909 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-175909 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd: (1m15.939032917s)
--- PASS: TestNetworkPlugins/group/bridge/Start (75.94s)

TestNetworkPlugins/group/calico/Start (60.41s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-175909 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd
E0908 14:23:04.506715    4118 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2314/.minikube/profiles/functional-478007/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:23:07.955855    4118 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2314/.minikube/profiles/default-k8s-diff-port-337423/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:23:19.812847    4118 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2314/.minikube/profiles/auto-175909/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:23:19.819232    4118 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2314/.minikube/profiles/auto-175909/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:23:19.830810    4118 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2314/.minikube/profiles/auto-175909/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:23:19.852297    4118 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2314/.minikube/profiles/auto-175909/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:23:19.893878    4118 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2314/.minikube/profiles/auto-175909/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:23:19.975424    4118 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2314/.minikube/profiles/auto-175909/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:23:20.137008    4118 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2314/.minikube/profiles/auto-175909/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:23:20.458431    4118 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2314/.minikube/profiles/auto-175909/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:23:21.100435    4118 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2314/.minikube/profiles/auto-175909/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:23:22.382158    4118 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2314/.minikube/profiles/auto-175909/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-175909 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd: (1m0.413089173s)
--- PASS: TestNetworkPlugins/group/calico/Start (60.41s)
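For calico, Start is followed by a ControllerPod check that waits for the calico-node pods in kube-system (see the ControllerPod block below). A minimal sketch of the same health check by hand, assuming the profile is still running (the k8s-app=calico-node label and namespace are taken from that block; the get pods invocation itself is illustrative, not from the test):

  kubectl --context calico-175909 get pods -n kube-system -l k8s-app=calico-node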

TestNetworkPlugins/group/bridge/KubeletFlags (0.42s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-175909 "pgrep -a kubelet"
I0908 14:23:22.973711    4118 config.go:182] Loaded profile config "bridge-175909": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.42s)

TestNetworkPlugins/group/bridge/NetCatPod (10.41s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-175909 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-4nxj8" [f41dfe4c-6629-4d18-8edf-399fbc4a99be] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0908 14:23:24.943657    4118 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2314/.minikube/profiles/auto-175909/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-4nxj8" [f41dfe4c-6629-4d18-8edf-399fbc4a99be] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.004936341s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.41s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-wp78c" [2e9952bf-a6f1-453f-922a-81c0fdf559e6] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
E0908 14:23:30.065388    4118 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2314/.minikube/profiles/auto-175909/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "calico-node-wp78c" [2e9952bf-a6f1-453f-922a-81c0fdf559e6] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.003542622s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/bridge/DNS (0.23s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-175909 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.23s)

TestNetworkPlugins/group/bridge/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-175909 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.16s)

TestNetworkPlugins/group/bridge/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-175909 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.17s)

TestNetworkPlugins/group/calico/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-175909 "pgrep -a kubelet"
I0908 14:23:34.994730    4118 config.go:182] Loaded profile config "calico-175909": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.31s)

TestNetworkPlugins/group/calico/NetCatPod (9.29s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-175909 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-qm6w9" [ef577a59-90a3-41d9-a516-864dd4172bbd] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0908 14:23:35.483471    4118 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2314/.minikube/profiles/no-preload-925401/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-qm6w9" [ef577a59-90a3-41d9-a516-864dd4172bbd] Running
E0908 14:23:40.307177    4118 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2314/.minikube/profiles/auto-175909/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 9.005050497s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (9.29s)

TestNetworkPlugins/group/calico/DNS (0.25s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-175909 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.25s)

TestNetworkPlugins/group/calico/Localhost (0.26s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-175909 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.26s)

TestNetworkPlugins/group/calico/HairPin (0.28s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-175909 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.28s)

Test skip (30/332)

TestDownloadOnly/v1.28.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

TestDownloadOnly/v1.28.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

TestDownloadOnly/v1.28.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

TestDownloadOnly/v1.34.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.34.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.0/cached-images (0.00s)

TestDownloadOnly/v1.34.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.34.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.0/binaries (0.00s)

TestDownloadOnly/v1.34.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.34.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.0/kubectl (0.00s)

TestDownloadOnlyKic (0.7s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-878084 --alsologtostderr --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:244: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-878084" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-878084
--- SKIP: TestDownloadOnlyKic (0.70s)

TestOffline (0s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

TestAddons/serial/GCPAuth/RealCredentials (0.01s)

=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:759: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.01s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestAddons/parallel/AmdGpuDevicePlugin (0s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1033: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1792: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestFunctionalNewestKubernetes (0s)

=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.26s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-300358" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-300358
--- SKIP: TestStartStop/group/disable-driver-mounts (0.26s)

TestNetworkPlugins/group/kubenet (3.48s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as containerd container runtimes requires CNI
panic.go:636: 
----------------------- debugLogs start: kubenet-175909 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-175909

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-175909

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-175909

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-175909

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-175909

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-175909

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-175909

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-175909

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-175909

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-175909

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-175909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-175909"

>>> host: /etc/hosts:
* Profile "kubenet-175909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-175909"

>>> host: /etc/resolv.conf:
* Profile "kubenet-175909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-175909"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-175909

>>> host: crictl pods:
* Profile "kubenet-175909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-175909"

>>> host: crictl containers:
* Profile "kubenet-175909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-175909"

>>> k8s: describe netcat deployment:
error: context "kubenet-175909" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-175909" does not exist

>>> k8s: netcat logs:
error: context "kubenet-175909" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-175909" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-175909" does not exist

>>> k8s: coredns logs:
error: context "kubenet-175909" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-175909" does not exist

>>> k8s: api server logs:
error: context "kubenet-175909" does not exist

>>> host: /etc/cni:
* Profile "kubenet-175909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-175909"

>>> host: ip a s:
* Profile "kubenet-175909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-175909"

>>> host: ip r s:
* Profile "kubenet-175909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-175909"

>>> host: iptables-save:
* Profile "kubenet-175909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-175909"

>>> host: iptables table nat:
* Profile "kubenet-175909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-175909"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-175909" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-175909" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-175909" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-175909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-175909"

>>> host: kubelet daemon config:
* Profile "kubenet-175909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-175909"

>>> k8s: kubelet logs:
* Profile "kubenet-175909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-175909"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-175909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-175909"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-175909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-175909"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21504-2314/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 08 Sep 2025 14:08:01 UTC
        provider: minikube.sigs.k8s.io
        version: v1.36.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: pause-521388
contexts:
- context:
    cluster: pause-521388
    extensions:
    - extension:
        last-update: Mon, 08 Sep 2025 14:08:01 UTC
        provider: minikube.sigs.k8s.io
        version: v1.36.0
      name: context_info
    namespace: default
    user: pause-521388
  name: pause-521388
current-context: ""
kind: Config
preferences: {}
users:
- name: pause-521388
  user:
    client-certificate: /home/jenkins/minikube-integration/21504-2314/.minikube/profiles/pause-521388/client.crt
    client-key: /home/jenkins/minikube-integration/21504-2314/.minikube/profiles/pause-521388/client.key
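The dump above explains the failures throughout these debug logs: the kubeconfig carries only the leftover pause-521388 entry and current-context is "", and no kubenet-175909 context exists because that profile was never started. A minimal sketch for inspecting or switching contexts by hand (standard kubectl subcommands, not part of the test run):

  kubectl config get-contexts
  kubectl config use-context pause-521388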

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-175909

>>> host: docker daemon status:
* Profile "kubenet-175909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-175909"

>>> host: docker daemon config:
* Profile "kubenet-175909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-175909"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-175909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-175909"

>>> host: docker system info:
* Profile "kubenet-175909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-175909"

>>> host: cri-docker daemon status:
* Profile "kubenet-175909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-175909"

>>> host: cri-docker daemon config:
* Profile "kubenet-175909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-175909"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-175909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-175909"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-175909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-175909"

>>> host: cri-dockerd version:
* Profile "kubenet-175909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-175909"

>>> host: containerd daemon status:
* Profile "kubenet-175909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-175909"

>>> host: containerd daemon config:
* Profile "kubenet-175909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-175909"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-175909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-175909"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-175909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-175909"

>>> host: containerd config dump:
* Profile "kubenet-175909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-175909"

>>> host: crio daemon status:
* Profile "kubenet-175909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-175909"

>>> host: crio daemon config:
* Profile "kubenet-175909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-175909"

>>> host: /etc/crio:
* Profile "kubenet-175909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-175909"

>>> host: crio config:
* Profile "kubenet-175909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-175909"

----------------------- debugLogs end: kubenet-175909 [took: 3.327749495s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-175909" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-175909
--- SKIP: TestNetworkPlugins/group/kubenet (3.48s)

TestNetworkPlugins/group/cilium (3.93s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:636: 
----------------------- debugLogs start: cilium-175909 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-175909

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-175909

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-175909

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-175909

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-175909

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-175909

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-175909

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-175909

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-175909

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-175909

>>> host: /etc/nsswitch.conf:
* Profile "cilium-175909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-175909"

>>> host: /etc/hosts:
* Profile "cilium-175909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-175909"

>>> host: /etc/resolv.conf:
* Profile "cilium-175909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-175909"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-175909

>>> host: crictl pods:
* Profile "cilium-175909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-175909"

>>> host: crictl containers:
* Profile "cilium-175909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-175909"

>>> k8s: describe netcat deployment:
error: context "cilium-175909" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-175909" does not exist

>>> k8s: netcat logs:
error: context "cilium-175909" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-175909" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-175909" does not exist

>>> k8s: coredns logs:
error: context "cilium-175909" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-175909" does not exist

>>> k8s: api server logs:
error: context "cilium-175909" does not exist

>>> host: /etc/cni:
* Profile "cilium-175909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-175909"

>>> host: ip a s:
* Profile "cilium-175909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-175909"

>>> host: ip r s:
* Profile "cilium-175909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-175909"

>>> host: iptables-save:
* Profile "cilium-175909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-175909"

>>> host: iptables table nat:
* Profile "cilium-175909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-175909"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-175909

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-175909

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-175909" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-175909" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-175909

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-175909

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-175909" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-175909" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-175909" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-175909" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-175909" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-175909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-175909"

>>> host: kubelet daemon config:
* Profile "cilium-175909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-175909"

>>> k8s: kubelet logs:
* Profile "cilium-175909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-175909"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-175909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-175909"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-175909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-175909"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21504-2314/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 08 Sep 2025 14:08:01 UTC
        provider: minikube.sigs.k8s.io
        version: v1.36.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: pause-521388
contexts:
- context:
    cluster: pause-521388
    extensions:
    - extension:
        last-update: Mon, 08 Sep 2025 14:08:01 UTC
        provider: minikube.sigs.k8s.io
        version: v1.36.0
      name: context_info
    namespace: default
    user: pause-521388
  name: pause-521388
current-context: ""
kind: Config
preferences: {}
users:
- name: pause-521388
  user:
    client-certificate: /home/jenkins/minikube-integration/21504-2314/.minikube/profiles/pause-521388/client.crt
    client-key: /home/jenkins/minikube-integration/21504-2314/.minikube/profiles/pause-521388/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-175909

>>> host: docker daemon status:
* Profile "cilium-175909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-175909"

>>> host: docker daemon config:
* Profile "cilium-175909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-175909"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-175909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-175909"

>>> host: docker system info:
* Profile "cilium-175909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-175909"

>>> host: cri-docker daemon status:
* Profile "cilium-175909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-175909"

>>> host: cri-docker daemon config:
* Profile "cilium-175909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-175909"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-175909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-175909"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-175909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-175909"

>>> host: cri-dockerd version:
* Profile "cilium-175909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-175909"

>>> host: containerd daemon status:
* Profile "cilium-175909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-175909"

>>> host: containerd daemon config:
* Profile "cilium-175909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-175909"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-175909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-175909"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-175909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-175909"

>>> host: containerd config dump:
* Profile "cilium-175909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-175909"

>>> host: crio daemon status:
* Profile "cilium-175909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-175909"

>>> host: crio daemon config:
* Profile "cilium-175909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-175909"

>>> host: /etc/crio:
* Profile "cilium-175909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-175909"

>>> host: crio config:
* Profile "cilium-175909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-175909"

----------------------- debugLogs end: cilium-175909 [took: 3.747000756s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-175909" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-175909
--- SKIP: TestNetworkPlugins/group/cilium (3.93s)
