Test Report: Docker_Linux_containerd_arm64 21790

0500345ed58569c501f3381e2b1a5a0e0bac6bd7:2025-10-27:42095

Failed tests (1/332)

Order  Failed test            Duration (s)
250    TestScheduledStopUnix  36.43
TestScheduledStopUnix (36.43s)
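The core failure is the reschedule check: after the pending `stop --schedule 5m` was replaced with `stop --schedule 15s`, the previously spawned scheduled-stop process (PID 420762) was still alive, so scheduled_stop_test.go:98 failed the test. A minimal sketch of the same check, assuming the daemonized process can be located with pgrep (the test tracks the PID itself; the pattern below is only illustrative):

	out/minikube-linux-arm64 stop -p scheduled-stop-727712 --schedule 5m
	pid=$(pgrep -of 'minikube.*stop.*--schedule')   # hypothetical lookup of the scheduled-stop daemon
	out/minikube-linux-arm64 stop -p scheduled-stop-727712 --schedule 15s
	# rescheduling should kill the old daemon; kill -0 merely probes whether the PID still exists
	kill -0 "$pid" 2>/dev/null && echo "BUG: scheduled-stop process $pid survived the reschedule"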

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-727712 --memory=3072 --driver=docker  --container-runtime=containerd
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-727712 --memory=3072 --driver=docker  --container-runtime=containerd: (31.352237281s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-727712 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-727712 -n scheduled-stop-727712
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-727712 --schedule 15s
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:98: process 420762 running but should have been killed on reschedule of stop
panic.go:636: *** TestScheduledStopUnix FAILED at 2025-10-27 22:52:08.577502959 +0000 UTC m=+2245.522959878
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestScheduledStopUnix]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestScheduledStopUnix]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect scheduled-stop-727712
helpers_test.go:243: (dbg) docker inspect scheduled-stop-727712:

-- stdout --
	[
	    {
	        "Id": "26f4adc48ea9e1d0d2328b79342904f7d44dc4199732de8eb586b348570410db",
	        "Created": "2025-10-27T22:51:42.162840997Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 418781,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-27T22:51:42.250549814Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/26f4adc48ea9e1d0d2328b79342904f7d44dc4199732de8eb586b348570410db/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/26f4adc48ea9e1d0d2328b79342904f7d44dc4199732de8eb586b348570410db/hostname",
	        "HostsPath": "/var/lib/docker/containers/26f4adc48ea9e1d0d2328b79342904f7d44dc4199732de8eb586b348570410db/hosts",
	        "LogPath": "/var/lib/docker/containers/26f4adc48ea9e1d0d2328b79342904f7d44dc4199732de8eb586b348570410db/26f4adc48ea9e1d0d2328b79342904f7d44dc4199732de8eb586b348570410db-json.log",
	        "Name": "/scheduled-stop-727712",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "scheduled-stop-727712:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "scheduled-stop-727712",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "26f4adc48ea9e1d0d2328b79342904f7d44dc4199732de8eb586b348570410db",
	                "LowerDir": "/var/lib/docker/overlay2/b8348dfa9ed4e8656da24a2aaac9c6325141ec8ade63a22644060cbc120292fd-init/diff:/var/lib/docker/overlay2/71868b2d6b922761474a3006f56cee03abbe2c6fed1e66f903ecd8890c7d8e07/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b8348dfa9ed4e8656da24a2aaac9c6325141ec8ade63a22644060cbc120292fd/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b8348dfa9ed4e8656da24a2aaac9c6325141ec8ade63a22644060cbc120292fd/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b8348dfa9ed4e8656da24a2aaac9c6325141ec8ade63a22644060cbc120292fd/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "scheduled-stop-727712",
	                "Source": "/var/lib/docker/volumes/scheduled-stop-727712/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "scheduled-stop-727712",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "scheduled-stop-727712",
	                "name.minikube.sigs.k8s.io": "scheduled-stop-727712",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3c0d2aa65e191f3c6347049c4066147e7d16fb80f518a217ee64d23a3a2f664a",
	            "SandboxKey": "/var/run/docker/netns/3c0d2aa65e19",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33335"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33336"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33339"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33337"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33338"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "scheduled-stop-727712": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "82:22:b6:35:a3:5c",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "c75eda9a6d12aa9214752d938b427744a6950652a79bd3d7d0a7f2b1f810837f",
	                    "EndpointID": "842f200da634f460d7e43bbb76ae226b4c9979432837700a0fb10a4961dea540",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "scheduled-stop-727712",
	                        "26f4adc48ea9"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-727712 -n scheduled-stop-727712
helpers_test.go:252: <<< TestScheduledStopUnix FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestScheduledStopUnix]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p scheduled-stop-727712 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p scheduled-stop-727712 logs -n 25: (1.178102477s)
helpers_test.go:260: TestScheduledStopUnix logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                              ARGS                                                                               │        PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ stop    │ -p multinode-889386                                                                                                                                             │ multinode-889386      │ jenkins │ v1.37.0 │ 27 Oct 25 22:46 UTC │ 27 Oct 25 22:46 UTC │
	│ start   │ -p multinode-889386 --wait=true -v=5 --alsologtostderr                                                                                                          │ multinode-889386      │ jenkins │ v1.37.0 │ 27 Oct 25 22:46 UTC │ 27 Oct 25 22:47 UTC │
	│ node    │ list -p multinode-889386                                                                                                                                        │ multinode-889386      │ jenkins │ v1.37.0 │ 27 Oct 25 22:47 UTC │                     │
	│ node    │ multinode-889386 node delete m03                                                                                                                                │ multinode-889386      │ jenkins │ v1.37.0 │ 27 Oct 25 22:47 UTC │ 27 Oct 25 22:47 UTC │
	│ stop    │ multinode-889386 stop                                                                                                                                           │ multinode-889386      │ jenkins │ v1.37.0 │ 27 Oct 25 22:47 UTC │ 27 Oct 25 22:47 UTC │
	│ start   │ -p multinode-889386 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd                                                          │ multinode-889386      │ jenkins │ v1.37.0 │ 27 Oct 25 22:48 UTC │ 27 Oct 25 22:48 UTC │
	│ node    │ list -p multinode-889386                                                                                                                                        │ multinode-889386      │ jenkins │ v1.37.0 │ 27 Oct 25 22:48 UTC │                     │
	│ start   │ -p multinode-889386-m02 --driver=docker  --container-runtime=containerd                                                                                         │ multinode-889386-m02  │ jenkins │ v1.37.0 │ 27 Oct 25 22:48 UTC │                     │
	│ start   │ -p multinode-889386-m03 --driver=docker  --container-runtime=containerd                                                                                         │ multinode-889386-m03  │ jenkins │ v1.37.0 │ 27 Oct 25 22:48 UTC │ 27 Oct 25 22:49 UTC │
	│ node    │ add -p multinode-889386                                                                                                                                         │ multinode-889386      │ jenkins │ v1.37.0 │ 27 Oct 25 22:49 UTC │                     │
	│ delete  │ -p multinode-889386-m03                                                                                                                                         │ multinode-889386-m03  │ jenkins │ v1.37.0 │ 27 Oct 25 22:49 UTC │ 27 Oct 25 22:49 UTC │
	│ delete  │ -p multinode-889386                                                                                                                                             │ multinode-889386      │ jenkins │ v1.37.0 │ 27 Oct 25 22:49 UTC │ 27 Oct 25 22:49 UTC │
	│ start   │ -p test-preload-289848 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.0 │ test-preload-289848   │ jenkins │ v1.37.0 │ 27 Oct 25 22:49 UTC │ 27 Oct 25 22:50 UTC │
	│ image   │ test-preload-289848 image pull gcr.io/k8s-minikube/busybox                                                                                                      │ test-preload-289848   │ jenkins │ v1.37.0 │ 27 Oct 25 22:50 UTC │ 27 Oct 25 22:50 UTC │
	│ stop    │ -p test-preload-289848                                                                                                                                          │ test-preload-289848   │ jenkins │ v1.37.0 │ 27 Oct 25 22:50 UTC │ 27 Oct 25 22:50 UTC │
	│ start   │ -p test-preload-289848 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd                                         │ test-preload-289848   │ jenkins │ v1.37.0 │ 27 Oct 25 22:50 UTC │ 27 Oct 25 22:51 UTC │
	│ image   │ test-preload-289848 image list                                                                                                                                  │ test-preload-289848   │ jenkins │ v1.37.0 │ 27 Oct 25 22:51 UTC │ 27 Oct 25 22:51 UTC │
	│ delete  │ -p test-preload-289848                                                                                                                                          │ test-preload-289848   │ jenkins │ v1.37.0 │ 27 Oct 25 22:51 UTC │ 27 Oct 25 22:51 UTC │
	│ start   │ -p scheduled-stop-727712 --memory=3072 --driver=docker  --container-runtime=containerd                                                                          │ scheduled-stop-727712 │ jenkins │ v1.37.0 │ 27 Oct 25 22:51 UTC │ 27 Oct 25 22:52 UTC │
	│ stop    │ -p scheduled-stop-727712 --schedule 5m                                                                                                                          │ scheduled-stop-727712 │ jenkins │ v1.37.0 │ 27 Oct 25 22:52 UTC │                     │
	│ stop    │ -p scheduled-stop-727712 --schedule 5m                                                                                                                          │ scheduled-stop-727712 │ jenkins │ v1.37.0 │ 27 Oct 25 22:52 UTC │                     │
	│ stop    │ -p scheduled-stop-727712 --schedule 5m                                                                                                                          │ scheduled-stop-727712 │ jenkins │ v1.37.0 │ 27 Oct 25 22:52 UTC │                     │
	│ stop    │ -p scheduled-stop-727712 --schedule 15s                                                                                                                         │ scheduled-stop-727712 │ jenkins │ v1.37.0 │ 27 Oct 25 22:52 UTC │                     │
	│ stop    │ -p scheduled-stop-727712 --schedule 15s                                                                                                                         │ scheduled-stop-727712 │ jenkins │ v1.37.0 │ 27 Oct 25 22:52 UTC │                     │
	│ stop    │ -p scheduled-stop-727712 --schedule 15s                                                                                                                         │ scheduled-stop-727712 │ jenkins │ v1.37.0 │ 27 Oct 25 22:52 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/27 22:51:36
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1027 22:51:36.745133  418392 out.go:360] Setting OutFile to fd 1 ...
	I1027 22:51:36.745444  418392 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 22:51:36.745448  418392 out.go:374] Setting ErrFile to fd 2...
	I1027 22:51:36.745452  418392 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 22:51:36.745750  418392 root.go:340] Updating PATH: /home/jenkins/minikube-integration/21790-269600/.minikube/bin
	I1027 22:51:36.746171  418392 out.go:368] Setting JSON to false
	I1027 22:51:36.747016  418392 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":9247,"bootTime":1761596250,"procs":156,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1027 22:51:36.747092  418392 start.go:143] virtualization:  
	I1027 22:51:36.750884  418392 out.go:179] * [scheduled-stop-727712] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1027 22:51:36.755320  418392 out.go:179]   - MINIKUBE_LOCATION=21790
	I1027 22:51:36.755396  418392 notify.go:221] Checking for updates...
	I1027 22:51:36.762056  418392 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1027 22:51:36.765171  418392 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21790-269600/kubeconfig
	I1027 22:51:36.768529  418392 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21790-269600/.minikube
	I1027 22:51:36.771589  418392 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1027 22:51:36.774795  418392 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1027 22:51:36.777915  418392 driver.go:422] Setting default libvirt URI to qemu:///system
	I1027 22:51:36.809668  418392 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1027 22:51:36.809788  418392 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 22:51:36.867316  418392 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2025-10-27 22:51:36.858224211 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1027 22:51:36.867407  418392 docker.go:318] overlay module found
	I1027 22:51:36.872614  418392 out.go:179] * Using the docker driver based on user configuration
	I1027 22:51:36.875582  418392 start.go:307] selected driver: docker
	I1027 22:51:36.875593  418392 start.go:928] validating driver "docker" against <nil>
	I1027 22:51:36.875613  418392 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1027 22:51:36.876328  418392 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 22:51:36.930638  418392 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2025-10-27 22:51:36.921817383 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1027 22:51:36.930786  418392 start_flags.go:326] no existing cluster config was found, will generate one from the flags 
	I1027 22:51:36.931031  418392 start_flags.go:973] Wait components to verify : map[apiserver:true system_pods:true]
	I1027 22:51:36.934018  418392 out.go:179] * Using Docker driver with root privileges
	I1027 22:51:36.936905  418392 cni.go:84] Creating CNI manager for ""
	I1027 22:51:36.936973  418392 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1027 22:51:36.936987  418392 start_flags.go:335] Found "CNI" CNI - setting NetworkPlugin=cni
	I1027 22:51:36.937076  418392 start.go:351] cluster config:
	{Name:scheduled-stop-727712 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:scheduled-stop-727712 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 22:51:36.940227  418392 out.go:179] * Starting "scheduled-stop-727712" primary control-plane node in "scheduled-stop-727712" cluster
	I1027 22:51:36.943090  418392 cache.go:124] Beginning downloading kic base image for docker with containerd
	I1027 22:51:36.946082  418392 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1027 22:51:36.949005  418392 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1027 22:51:36.949079  418392 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21790-269600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4
	I1027 22:51:36.949088  418392 cache.go:59] Caching tarball of preloaded images
	I1027 22:51:36.949094  418392 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1027 22:51:36.949183  418392 preload.go:233] Found /home/jenkins/minikube-integration/21790-269600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1027 22:51:36.949193  418392 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on containerd
	I1027 22:51:36.949526  418392 profile.go:143] Saving config to /home/jenkins/minikube-integration/21790-269600/.minikube/profiles/scheduled-stop-727712/config.json ...
	I1027 22:51:36.949544  418392 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-269600/.minikube/profiles/scheduled-stop-727712/config.json: {Name:mk8a2f8f715fa6845c4839e4a3e5384abd0495c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 22:51:36.968993  418392 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1027 22:51:36.969005  418392 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1027 22:51:36.969024  418392 cache.go:233] Successfully downloaded all kic artifacts
	I1027 22:51:36.969046  418392 start.go:360] acquireMachinesLock for scheduled-stop-727712: {Name:mke1bc6c810737b084be64d6ad0575eea26ff494 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1027 22:51:36.969157  418392 start.go:364] duration metric: took 97.306µs to acquireMachinesLock for "scheduled-stop-727712"
	I1027 22:51:36.969181  418392 start.go:93] Provisioning new machine with config: &{Name:scheduled-stop-727712 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:scheduled-stop-727712 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1027 22:51:36.969245  418392 start.go:125] createHost starting for "" (driver="docker")
	I1027 22:51:36.974514  418392 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1027 22:51:36.974754  418392 start.go:159] libmachine.API.Create for "scheduled-stop-727712" (driver="docker")
	I1027 22:51:36.974788  418392 client.go:173] LocalClient.Create starting
	I1027 22:51:36.974888  418392 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21790-269600/.minikube/certs/ca.pem
	I1027 22:51:36.974927  418392 main.go:143] libmachine: Decoding PEM data...
	I1027 22:51:36.974939  418392 main.go:143] libmachine: Parsing certificate...
	I1027 22:51:36.975009  418392 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21790-269600/.minikube/certs/cert.pem
	I1027 22:51:36.975029  418392 main.go:143] libmachine: Decoding PEM data...
	I1027 22:51:36.975038  418392 main.go:143] libmachine: Parsing certificate...
	I1027 22:51:36.975407  418392 cli_runner.go:164] Run: docker network inspect scheduled-stop-727712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1027 22:51:36.990876  418392 cli_runner.go:211] docker network inspect scheduled-stop-727712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1027 22:51:36.990980  418392 network_create.go:284] running [docker network inspect scheduled-stop-727712] to gather additional debugging logs...
	I1027 22:51:36.990997  418392 cli_runner.go:164] Run: docker network inspect scheduled-stop-727712
	W1027 22:51:37.009096  418392 cli_runner.go:211] docker network inspect scheduled-stop-727712 returned with exit code 1
	I1027 22:51:37.009123  418392 network_create.go:287] error running [docker network inspect scheduled-stop-727712]: docker network inspect scheduled-stop-727712: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network scheduled-stop-727712 not found
	I1027 22:51:37.009144  418392 network_create.go:289] output of [docker network inspect scheduled-stop-727712]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network scheduled-stop-727712 not found
	
	** /stderr **
	I1027 22:51:37.009271  418392 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1027 22:51:37.028989  418392 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-743a90b7240a IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:ea:c0:8c:48:2b:2c} reservation:<nil>}
	I1027 22:51:37.029275  418392 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-eb21287fdf9a IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:6a:a0:7e:f1:d9:d6} reservation:<nil>}
	I1027 22:51:37.029471  418392 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-176a96de0236 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:5a:24:d0:29:4c:9c} reservation:<nil>}
	I1027 22:51:37.029821  418392 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019fa0b0}
	I1027 22:51:37.029837  418392 network_create.go:124] attempt to create docker network scheduled-stop-727712 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1027 22:51:37.029898  418392 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=scheduled-stop-727712 scheduled-stop-727712
	I1027 22:51:37.087778  418392 network_create.go:108] docker network scheduled-stop-727712 192.168.76.0/24 created
	I1027 22:51:37.087814  418392 kic.go:121] calculated static IP "192.168.76.2" for the "scheduled-stop-727712" container
	I1027 22:51:37.087886  418392 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1027 22:51:37.103631  418392 cli_runner.go:164] Run: docker volume create scheduled-stop-727712 --label name.minikube.sigs.k8s.io=scheduled-stop-727712 --label created_by.minikube.sigs.k8s.io=true
	I1027 22:51:37.120734  418392 oci.go:103] Successfully created a docker volume scheduled-stop-727712
	I1027 22:51:37.120891  418392 cli_runner.go:164] Run: docker run --rm --name scheduled-stop-727712-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=scheduled-stop-727712 --entrypoint /usr/bin/test -v scheduled-stop-727712:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1027 22:51:37.663990  418392 oci.go:107] Successfully prepared a docker volume scheduled-stop-727712
	I1027 22:51:37.664037  418392 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1027 22:51:37.664055  418392 kic.go:194] Starting extracting preloaded images to volume ...
	I1027 22:51:37.664119  418392 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21790-269600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v scheduled-stop-727712:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	I1027 22:51:42.070799  418392 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21790-269600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v scheduled-stop-727712:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (4.406633691s)
	I1027 22:51:42.070822  418392 kic.go:203] duration metric: took 4.40676394s to extract preloaded images to volume ...
	W1027 22:51:42.071027  418392 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1027 22:51:42.071142  418392 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1027 22:51:42.139867  418392 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname scheduled-stop-727712 --name scheduled-stop-727712 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=scheduled-stop-727712 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=scheduled-stop-727712 --network scheduled-stop-727712 --ip 192.168.76.2 --volume scheduled-stop-727712:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1027 22:51:42.486611  418392 cli_runner.go:164] Run: docker container inspect scheduled-stop-727712 --format={{.State.Running}}
	I1027 22:51:42.513114  418392 cli_runner.go:164] Run: docker container inspect scheduled-stop-727712 --format={{.State.Status}}
	I1027 22:51:42.536920  418392 cli_runner.go:164] Run: docker exec scheduled-stop-727712 stat /var/lib/dpkg/alternatives/iptables
	I1027 22:51:42.592307  418392 oci.go:144] the created container "scheduled-stop-727712" has a running status.
	I1027 22:51:42.592338  418392 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21790-269600/.minikube/machines/scheduled-stop-727712/id_rsa...
	I1027 22:51:43.091887  418392 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21790-269600/.minikube/machines/scheduled-stop-727712/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1027 22:51:43.120108  418392 cli_runner.go:164] Run: docker container inspect scheduled-stop-727712 --format={{.State.Status}}
	I1027 22:51:43.149865  418392 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1027 22:51:43.149881  418392 kic_runner.go:114] Args: [docker exec --privileged scheduled-stop-727712 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1027 22:51:43.213797  418392 cli_runner.go:164] Run: docker container inspect scheduled-stop-727712 --format={{.State.Status}}
	I1027 22:51:43.236133  418392 machine.go:94] provisionDockerMachine start ...
	I1027 22:51:43.236215  418392 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-727712
	I1027 22:51:43.262558  418392 main.go:143] libmachine: Using SSH client type: native
	I1027 22:51:43.262908  418392 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33335 <nil> <nil>}
	I1027 22:51:43.262922  418392 main.go:143] libmachine: About to run SSH command:
	hostname
	I1027 22:51:43.432725  418392 main.go:143] libmachine: SSH cmd err, output: <nil>: scheduled-stop-727712
	
	I1027 22:51:43.432740  418392 ubuntu.go:182] provisioning hostname "scheduled-stop-727712"
	I1027 22:51:43.432906  418392 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-727712
	I1027 22:51:43.452521  418392 main.go:143] libmachine: Using SSH client type: native
	I1027 22:51:43.452893  418392 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33335 <nil> <nil>}
	I1027 22:51:43.452903  418392 main.go:143] libmachine: About to run SSH command:
	sudo hostname scheduled-stop-727712 && echo "scheduled-stop-727712" | sudo tee /etc/hostname
	I1027 22:51:43.616025  418392 main.go:143] libmachine: SSH cmd err, output: <nil>: scheduled-stop-727712
	
	I1027 22:51:43.616095  418392 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-727712
	I1027 22:51:43.637377  418392 main.go:143] libmachine: Using SSH client type: native
	I1027 22:51:43.637691  418392 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33335 <nil> <nil>}
	I1027 22:51:43.637706  418392 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sscheduled-stop-727712' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 scheduled-stop-727712/g' /etc/hosts;
				else 
					echo '127.0.1.1 scheduled-stop-727712' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1027 22:51:43.800869  418392 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1027 22:51:43.800886  418392 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21790-269600/.minikube CaCertPath:/home/jenkins/minikube-integration/21790-269600/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21790-269600/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21790-269600/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21790-269600/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21790-269600/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21790-269600/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21790-269600/.minikube}
	I1027 22:51:43.800913  418392 ubuntu.go:190] setting up certificates
	I1027 22:51:43.800921  418392 provision.go:84] configureAuth start
	I1027 22:51:43.800986  418392 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" scheduled-stop-727712
	I1027 22:51:43.818394  418392 provision.go:143] copyHostCerts
	I1027 22:51:43.818453  418392 exec_runner.go:144] found /home/jenkins/minikube-integration/21790-269600/.minikube/ca.pem, removing ...
	I1027 22:51:43.818461  418392 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21790-269600/.minikube/ca.pem
	I1027 22:51:43.818537  418392 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21790-269600/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21790-269600/.minikube/ca.pem (1078 bytes)
	I1027 22:51:43.818620  418392 exec_runner.go:144] found /home/jenkins/minikube-integration/21790-269600/.minikube/cert.pem, removing ...
	I1027 22:51:43.818624  418392 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21790-269600/.minikube/cert.pem
	I1027 22:51:43.818652  418392 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21790-269600/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21790-269600/.minikube/cert.pem (1123 bytes)
	I1027 22:51:43.818699  418392 exec_runner.go:144] found /home/jenkins/minikube-integration/21790-269600/.minikube/key.pem, removing ...
	I1027 22:51:43.818702  418392 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21790-269600/.minikube/key.pem
	I1027 22:51:43.818724  418392 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21790-269600/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21790-269600/.minikube/key.pem (1679 bytes)
	I1027 22:51:43.818765  418392 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21790-269600/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21790-269600/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21790-269600/.minikube/certs/ca-key.pem org=jenkins.scheduled-stop-727712 san=[127.0.0.1 192.168.76.2 localhost minikube scheduled-stop-727712]
	I1027 22:51:44.583695  418392 provision.go:177] copyRemoteCerts
	I1027 22:51:44.583755  418392 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1027 22:51:44.583794  418392 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-727712
	I1027 22:51:44.601212  418392 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33335 SSHKeyPath:/home/jenkins/minikube-integration/21790-269600/.minikube/machines/scheduled-stop-727712/id_rsa Username:docker}
	I1027 22:51:44.704983  418392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-269600/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1027 22:51:44.722397  418392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-269600/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I1027 22:51:44.739802  418392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-269600/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1027 22:51:44.758321  418392 provision.go:87] duration metric: took 957.370596ms to configureAuth
	I1027 22:51:44.758339  418392 ubuntu.go:206] setting minikube options for container-runtime
	I1027 22:51:44.758537  418392 config.go:182] Loaded profile config "scheduled-stop-727712": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1027 22:51:44.758543  418392 machine.go:97] duration metric: took 1.522400211s to provisionDockerMachine
	I1027 22:51:44.758548  418392 client.go:176] duration metric: took 7.783755038s to LocalClient.Create
	I1027 22:51:44.758560  418392 start.go:167] duration metric: took 7.783808996s to libmachine.API.Create "scheduled-stop-727712"
	I1027 22:51:44.758565  418392 start.go:293] postStartSetup for "scheduled-stop-727712" (driver="docker")
	I1027 22:51:44.758574  418392 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1027 22:51:44.758620  418392 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1027 22:51:44.758663  418392 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-727712
	I1027 22:51:44.775447  418392 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33335 SSHKeyPath:/home/jenkins/minikube-integration/21790-269600/.minikube/machines/scheduled-stop-727712/id_rsa Username:docker}
	I1027 22:51:44.881684  418392 ssh_runner.go:195] Run: cat /etc/os-release
	I1027 22:51:44.884801  418392 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1027 22:51:44.884820  418392 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1027 22:51:44.884829  418392 filesync.go:126] Scanning /home/jenkins/minikube-integration/21790-269600/.minikube/addons for local assets ...
	I1027 22:51:44.884884  418392 filesync.go:126] Scanning /home/jenkins/minikube-integration/21790-269600/.minikube/files for local assets ...
	I1027 22:51:44.884966  418392 filesync.go:149] local asset: /home/jenkins/minikube-integration/21790-269600/.minikube/files/etc/ssl/certs/2714482.pem -> 2714482.pem in /etc/ssl/certs
	I1027 22:51:44.885071  418392 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1027 22:51:44.892228  418392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-269600/.minikube/files/etc/ssl/certs/2714482.pem --> /etc/ssl/certs/2714482.pem (1708 bytes)
	I1027 22:51:44.908898  418392 start.go:296] duration metric: took 150.319381ms for postStartSetup
	I1027 22:51:44.909264  418392 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" scheduled-stop-727712
	I1027 22:51:44.926000  418392 profile.go:143] Saving config to /home/jenkins/minikube-integration/21790-269600/.minikube/profiles/scheduled-stop-727712/config.json ...
	I1027 22:51:44.926284  418392 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1027 22:51:44.926323  418392 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-727712
	I1027 22:51:44.943017  418392 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33335 SSHKeyPath:/home/jenkins/minikube-integration/21790-269600/.minikube/machines/scheduled-stop-727712/id_rsa Username:docker}
	I1027 22:51:45.066412  418392 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1027 22:51:45.075235  418392 start.go:128] duration metric: took 8.105959431s to createHost
	I1027 22:51:45.075253  418392 start.go:83] releasing machines lock for "scheduled-stop-727712", held for 8.106089253s
	I1027 22:51:45.075356  418392 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" scheduled-stop-727712
	I1027 22:51:45.104017  418392 ssh_runner.go:195] Run: cat /version.json
	I1027 22:51:45.104062  418392 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-727712
	I1027 22:51:45.104329  418392 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1027 22:51:45.104384  418392 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-727712
	I1027 22:51:45.148890  418392 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33335 SSHKeyPath:/home/jenkins/minikube-integration/21790-269600/.minikube/machines/scheduled-stop-727712/id_rsa Username:docker}
	I1027 22:51:45.152885  418392 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33335 SSHKeyPath:/home/jenkins/minikube-integration/21790-269600/.minikube/machines/scheduled-stop-727712/id_rsa Username:docker}
	I1027 22:51:45.388001  418392 ssh_runner.go:195] Run: systemctl --version
	I1027 22:51:45.396994  418392 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1027 22:51:45.402368  418392 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1027 22:51:45.402440  418392 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1027 22:51:45.446508  418392 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1027 22:51:45.446528  418392 start.go:496] detecting cgroup driver to use...
	I1027 22:51:45.446610  418392 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1027 22:51:45.446734  418392 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1027 22:51:45.470301  418392 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1027 22:51:45.485477  418392 docker.go:218] disabling cri-docker service (if available) ...
	I1027 22:51:45.485535  418392 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1027 22:51:45.507248  418392 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1027 22:51:45.526883  418392 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1027 22:51:45.649095  418392 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1027 22:51:45.774675  418392 docker.go:234] disabling docker service ...
	I1027 22:51:45.774733  418392 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1027 22:51:45.797144  418392 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1027 22:51:45.810708  418392 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1027 22:51:45.943079  418392 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1027 22:51:46.065714  418392 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1027 22:51:46.079852  418392 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1027 22:51:46.094965  418392 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1027 22:51:46.104281  418392 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1027 22:51:46.113671  418392 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1027 22:51:46.113732  418392 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1027 22:51:46.122986  418392 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1027 22:51:46.132076  418392 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1027 22:51:46.141345  418392 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1027 22:51:46.150232  418392 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1027 22:51:46.158783  418392 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1027 22:51:46.167609  418392 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1027 22:51:46.176686  418392 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
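
The sed one-liners above keep containerd on the cgroupfs driver, matching the cgroupDriver: cgroupfs setting written into the KubeletConfiguration further down in this log. A minimal Go sketch of the SystemdCgroup rewrite with the same regex semantics as the sed; the sample input line is illustrative, not this node's actual config.toml:

    package main

    import (
        "fmt"
        "regexp"
    )

    func main() {
        conf := "    SystemdCgroup = true\n" // illustrative config.toml fragment
        re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
        // prints "    SystemdCgroup = false", preserving the indent as capture group 1
        fmt.Print(re.ReplaceAllString(conf, "${1}SystemdCgroup = false"))
    }
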
	I1027 22:51:46.186807  418392 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1027 22:51:46.195021  418392 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1027 22:51:46.202581  418392 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 22:51:46.319799  418392 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1027 22:51:46.465441  418392 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1027 22:51:46.465519  418392 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1027 22:51:46.469644  418392 start.go:564] Will wait 60s for crictl version
	I1027 22:51:46.469703  418392 ssh_runner.go:195] Run: which crictl
	I1027 22:51:46.473443  418392 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1027 22:51:46.508097  418392 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.7.28
	RuntimeApiVersion:  v1
	I1027 22:51:46.508164  418392 ssh_runner.go:195] Run: containerd --version
	I1027 22:51:46.530755  418392 ssh_runner.go:195] Run: containerd --version
	I1027 22:51:46.562672  418392 out.go:179] * Preparing Kubernetes v1.34.1 on containerd 1.7.28 ...
	I1027 22:51:46.565539  418392 cli_runner.go:164] Run: docker network inspect scheduled-stop-727712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1027 22:51:46.581324  418392 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1027 22:51:46.585002  418392 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1027 22:51:46.594581  418392 kubeadm.go:884] updating cluster {Name:scheduled-stop-727712 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:scheduled-stop-727712 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1027 22:51:46.594688  418392 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1027 22:51:46.594743  418392 ssh_runner.go:195] Run: sudo crictl images --output json
	I1027 22:51:46.619379  418392 containerd.go:627] all images are preloaded for containerd runtime.
	I1027 22:51:46.619392  418392 containerd.go:534] Images already preloaded, skipping extraction
	I1027 22:51:46.619451  418392 ssh_runner.go:195] Run: sudo crictl images --output json
	I1027 22:51:46.644016  418392 containerd.go:627] all images are preloaded for containerd runtime.
	I1027 22:51:46.644028  418392 cache_images.go:86] Images are preloaded, skipping loading
	I1027 22:51:46.644034  418392 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 containerd true true} ...
	I1027 22:51:46.644170  418392 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=scheduled-stop-727712 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:scheduled-stop-727712 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1027 22:51:46.644233  418392 ssh_runner.go:195] Run: sudo crictl info
	I1027 22:51:46.673899  418392 cni.go:84] Creating CNI manager for ""
	I1027 22:51:46.673909  418392 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1027 22:51:46.673934  418392 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
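
kindnet is picked as the CNI and the pod network gets 10.244.0.0/16, separate from the 10.96.0.0/12 service CIDR used above. A quick Go check that the two ranges cannot collide; this is a sanity sketch, not part of minikube:

    package main

    import (
        "fmt"
        "net"
    )

    func main() {
        _, pod, _ := net.ParseCIDR("10.244.0.0/16")
        _, svc, _ := net.ParseCIDR("10.96.0.0/12") // spans 10.96.0.0 through 10.111.255.255
        fmt.Println(pod.Contains(svc.IP), svc.Contains(pod.IP)) // false false: disjoint ranges
    }
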
	I1027 22:51:46.673962  418392 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:scheduled-stop-727712 NodeName:scheduled-stop-727712 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1027 22:51:46.674075  418392 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "scheduled-stop-727712"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
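
Note that the extraArgs entries in the kubeadm config above use v1beta4's ordered name/value pair form rather than the older v1beta3 map form. Go structs mirroring that shape, with illustrative names rather than kubeadm's own types:

    // Package sketch mirrors the v1beta4 extraArgs shape seen above.
    package sketch

    // Arg is one ordered name/value flag entry under extraArgs.
    type Arg struct {
        Name  string `yaml:"name"`
        Value string `yaml:"value"`
    }

    // ControlPlaneComponent holds a component's extraArgs list.
    type ControlPlaneComponent struct {
        ExtraArgs []Arg `yaml:"extraArgs"`
    }

Ordered pairs can express a flag passed more than once, which the old map form could not; that is the usual rationale given for the v1beta4 change.
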
	
	I1027 22:51:46.674138  418392 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1027 22:51:46.682047  418392 binaries.go:44] Found k8s binaries, skipping transfer
	I1027 22:51:46.682108  418392 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1027 22:51:46.690131  418392 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (325 bytes)
	I1027 22:51:46.703604  418392 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1027 22:51:46.716769  418392 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2234 bytes)
	I1027 22:51:46.729941  418392 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1027 22:51:46.733723  418392 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1027 22:51:46.743793  418392 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 22:51:46.861840  418392 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1027 22:51:46.877338  418392 certs.go:69] Setting up /home/jenkins/minikube-integration/21790-269600/.minikube/profiles/scheduled-stop-727712 for IP: 192.168.76.2
	I1027 22:51:46.877350  418392 certs.go:195] generating shared ca certs ...
	I1027 22:51:46.877364  418392 certs.go:227] acquiring lock for ca certs: {Name:mk23112b7c069e590ec7058965e0532af7da3447 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 22:51:46.877490  418392 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21790-269600/.minikube/ca.key
	I1027 22:51:46.877538  418392 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21790-269600/.minikube/proxy-client-ca.key
	I1027 22:51:46.877543  418392 certs.go:257] generating profile certs ...
	I1027 22:51:46.877600  418392 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21790-269600/.minikube/profiles/scheduled-stop-727712/client.key
	I1027 22:51:46.877609  418392 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21790-269600/.minikube/profiles/scheduled-stop-727712/client.crt with IP's: []
	I1027 22:51:47.139642  418392 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21790-269600/.minikube/profiles/scheduled-stop-727712/client.crt ...
	I1027 22:51:47.139658  418392 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-269600/.minikube/profiles/scheduled-stop-727712/client.crt: {Name:mk02b1fe44fe7d263d295e27b35c25679f2129b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 22:51:47.139864  418392 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21790-269600/.minikube/profiles/scheduled-stop-727712/client.key ...
	I1027 22:51:47.139875  418392 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-269600/.minikube/profiles/scheduled-stop-727712/client.key: {Name:mk12d2db0690fcc29a3710dc2a09afac24477f49 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 22:51:47.139966  418392 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21790-269600/.minikube/profiles/scheduled-stop-727712/apiserver.key.c5fdf043
	I1027 22:51:47.139992  418392 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21790-269600/.minikube/profiles/scheduled-stop-727712/apiserver.crt.c5fdf043 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1027 22:51:47.404290  418392 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21790-269600/.minikube/profiles/scheduled-stop-727712/apiserver.crt.c5fdf043 ...
	I1027 22:51:47.404311  418392 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-269600/.minikube/profiles/scheduled-stop-727712/apiserver.crt.c5fdf043: {Name:mkcfcdd668803596f4fe7cf11d4d8db113b79b40 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 22:51:47.404492  418392 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21790-269600/.minikube/profiles/scheduled-stop-727712/apiserver.key.c5fdf043 ...
	I1027 22:51:47.404501  418392 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-269600/.minikube/profiles/scheduled-stop-727712/apiserver.key.c5fdf043: {Name:mk5146e5bf103239dfe7fcafd830aba4e07f8279 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 22:51:47.404585  418392 certs.go:382] copying /home/jenkins/minikube-integration/21790-269600/.minikube/profiles/scheduled-stop-727712/apiserver.crt.c5fdf043 -> /home/jenkins/minikube-integration/21790-269600/.minikube/profiles/scheduled-stop-727712/apiserver.crt
	I1027 22:51:47.404666  418392 certs.go:386] copying /home/jenkins/minikube-integration/21790-269600/.minikube/profiles/scheduled-stop-727712/apiserver.key.c5fdf043 -> /home/jenkins/minikube-integration/21790-269600/.minikube/profiles/scheduled-stop-727712/apiserver.key
	I1027 22:51:47.404718  418392 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21790-269600/.minikube/profiles/scheduled-stop-727712/proxy-client.key
	I1027 22:51:47.404731  418392 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21790-269600/.minikube/profiles/scheduled-stop-727712/proxy-client.crt with IP's: []
	I1027 22:51:47.765438  418392 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21790-269600/.minikube/profiles/scheduled-stop-727712/proxy-client.crt ...
	I1027 22:51:47.765456  418392 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-269600/.minikube/profiles/scheduled-stop-727712/proxy-client.crt: {Name:mk3110cc948bc97ba66f86030060c484290252dc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 22:51:47.765698  418392 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21790-269600/.minikube/profiles/scheduled-stop-727712/proxy-client.key ...
	I1027 22:51:47.765707  418392 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-269600/.minikube/profiles/scheduled-stop-727712/proxy-client.key: {Name:mka437a15c8009921bf85405f8bfba897937a727 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
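
Each "Generating cert ... with IP's" step above produces a CA-signed cert whose IP SANs cover 10.96.0.1 (the first address of the 10.96.0.0/12 service CIDR, i.e. the in-cluster apiserver service IP), loopback, and the node IP 192.168.76.2. A self-contained Go sketch of the equivalent issuance; a throwaway ECDSA pair stands in for the minikubeCA certs loaded above, and minikube's own key types and usages may differ:

    package main

    import (
        "crypto/ecdsa"
        "crypto/elliptic"
        "crypto/rand"
        "crypto/x509"
        "crypto/x509/pkix"
        "fmt"
        "math/big"
        "net"
        "time"
    )

    func main() {
        // throwaway CA standing in for the "minikubeCA" pair the log reuses
        caKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(26280 * time.Hour), // CertExpiration above
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        // serving cert with the IP SANs printed in the log
        leafKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
        leaf := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{CommonName: "minikube"},
            IPAddresses: []net.IP{
                net.ParseIP("10.96.0.1"), // apiserver ClusterIP, first of 10.96.0.0/12
                net.ParseIP("127.0.0.1"),
                net.ParseIP("10.0.0.1"),
                net.ParseIP("192.168.76.2"), // node IP
            },
            NotBefore:   time.Now(),
            NotAfter:    time.Now().Add(26280 * time.Hour),
            KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        der, err := x509.CreateCertificate(rand.Reader, leaf, caCert, &leafKey.PublicKey, caKey)
        fmt.Println(len(der), err)
    }
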
	I1027 22:51:47.765918  418392 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-269600/.minikube/certs/271448.pem (1338 bytes)
	W1027 22:51:47.765959  418392 certs.go:480] ignoring /home/jenkins/minikube-integration/21790-269600/.minikube/certs/271448_empty.pem, impossibly tiny 0 bytes
	I1027 22:51:47.765966  418392 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-269600/.minikube/certs/ca-key.pem (1679 bytes)
	I1027 22:51:47.765989  418392 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-269600/.minikube/certs/ca.pem (1078 bytes)
	I1027 22:51:47.766011  418392 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-269600/.minikube/certs/cert.pem (1123 bytes)
	I1027 22:51:47.766034  418392 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-269600/.minikube/certs/key.pem (1679 bytes)
	I1027 22:51:47.766076  418392 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-269600/.minikube/files/etc/ssl/certs/2714482.pem (1708 bytes)
	I1027 22:51:47.766764  418392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-269600/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1027 22:51:47.787636  418392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-269600/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1027 22:51:47.806074  418392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-269600/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1027 22:51:47.825368  418392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-269600/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1027 22:51:47.843714  418392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-269600/.minikube/profiles/scheduled-stop-727712/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1027 22:51:47.862718  418392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-269600/.minikube/profiles/scheduled-stop-727712/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1027 22:51:47.881965  418392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-269600/.minikube/profiles/scheduled-stop-727712/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1027 22:51:47.900056  418392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-269600/.minikube/profiles/scheduled-stop-727712/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1027 22:51:47.917613  418392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-269600/.minikube/certs/271448.pem --> /usr/share/ca-certificates/271448.pem (1338 bytes)
	I1027 22:51:47.935806  418392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-269600/.minikube/files/etc/ssl/certs/2714482.pem --> /usr/share/ca-certificates/2714482.pem (1708 bytes)
	I1027 22:51:47.953959  418392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-269600/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1027 22:51:47.971871  418392 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1027 22:51:47.985415  418392 ssh_runner.go:195] Run: openssl version
	I1027 22:51:47.991668  418392 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/271448.pem && ln -fs /usr/share/ca-certificates/271448.pem /etc/ssl/certs/271448.pem"
	I1027 22:51:48.000184  418392 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/271448.pem
	I1027 22:51:48.004755  418392 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 27 22:22 /usr/share/ca-certificates/271448.pem
	I1027 22:51:48.004848  418392 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/271448.pem
	I1027 22:51:48.051298  418392 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/271448.pem /etc/ssl/certs/51391683.0"
	I1027 22:51:48.060362  418392 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2714482.pem && ln -fs /usr/share/ca-certificates/2714482.pem /etc/ssl/certs/2714482.pem"
	I1027 22:51:48.069230  418392 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2714482.pem
	I1027 22:51:48.073298  418392 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 27 22:22 /usr/share/ca-certificates/2714482.pem
	I1027 22:51:48.073380  418392 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2714482.pem
	I1027 22:51:48.115244  418392 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2714482.pem /etc/ssl/certs/3ec20f2e.0"
	I1027 22:51:48.123910  418392 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1027 22:51:48.132180  418392 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1027 22:51:48.136162  418392 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 27 22:15 /usr/share/ca-certificates/minikubeCA.pem
	I1027 22:51:48.136222  418392 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1027 22:51:48.177325  418392 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
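
The link names created above (51391683.0, 3ec20f2e.0, b5213941.0) are OpenSSL subject hashes with a ".0" suffix, taken from the same openssl x509 -hash -noout invocations the test runs. A Go sketch that reproduces the link path for minikubeCA.pem by shelling out to the identical command:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout",
            "-in", "/usr/share/ca-certificates/minikubeCA.pem").Output()
        if err != nil {
            panic(err)
        }
        // prints /etc/ssl/certs/b5213941.0 for this run's minikubeCA.pem
        fmt.Printf("/etc/ssl/certs/%s.0\n", strings.TrimSpace(string(out)))
    }
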
	I1027 22:51:48.185550  418392 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1027 22:51:48.188995  418392 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1027 22:51:48.189053  418392 kubeadm.go:401] StartCluster: {Name:scheduled-stop-727712 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:scheduled-stop-727712 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 22:51:48.189128  418392 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1027 22:51:48.189192  418392 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1027 22:51:48.215444  418392 cri.go:89] found id: ""
	I1027 22:51:48.215515  418392 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1027 22:51:48.223638  418392 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1027 22:51:48.231357  418392 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1027 22:51:48.231412  418392 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1027 22:51:48.242543  418392 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1027 22:51:48.242553  418392 kubeadm.go:158] found existing configuration files:
	
	I1027 22:51:48.242605  418392 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1027 22:51:48.250414  418392 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1027 22:51:48.250469  418392 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1027 22:51:48.258145  418392 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1027 22:51:48.265631  418392 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1027 22:51:48.265684  418392 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1027 22:51:48.272858  418392 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1027 22:51:48.281561  418392 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1027 22:51:48.281614  418392 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1027 22:51:48.288716  418392 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1027 22:51:48.296311  418392 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1027 22:51:48.296374  418392 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1027 22:51:48.303828  418392 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1027 22:51:48.342700  418392 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1027 22:51:48.342857  418392 kubeadm.go:319] [preflight] Running pre-flight checks
	I1027 22:51:48.367738  418392 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1027 22:51:48.367801  418392 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1027 22:51:48.367834  418392 kubeadm.go:319] OS: Linux
	I1027 22:51:48.367879  418392 kubeadm.go:319] CGROUPS_CPU: enabled
	I1027 22:51:48.367926  418392 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1027 22:51:48.367973  418392 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1027 22:51:48.368020  418392 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1027 22:51:48.368067  418392 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1027 22:51:48.368114  418392 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1027 22:51:48.368159  418392 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1027 22:51:48.368206  418392 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1027 22:51:48.368251  418392 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1027 22:51:48.442152  418392 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1027 22:51:48.442260  418392 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1027 22:51:48.442355  418392 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1027 22:51:48.453175  418392 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1027 22:51:48.458405  418392 out.go:252]   - Generating certificates and keys ...
	I1027 22:51:48.458498  418392 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1027 22:51:48.458566  418392 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1027 22:51:49.363566  418392 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1027 22:51:50.243636  418392 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1027 22:51:50.425142  418392 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1027 22:51:51.022010  418392 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1027 22:51:51.479005  418392 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1027 22:51:51.479311  418392 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost scheduled-stop-727712] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1027 22:51:51.969067  418392 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1027 22:51:51.969410  418392 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost scheduled-stop-727712] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1027 22:51:52.212683  418392 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1027 22:51:52.316722  418392 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1027 22:51:53.189804  418392 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1027 22:51:53.190032  418392 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1027 22:51:53.931633  418392 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1027 22:51:55.000538  418392 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1027 22:51:55.390938  418392 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1027 22:51:55.777247  418392 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1027 22:51:56.376030  418392 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1027 22:51:56.376802  418392 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1027 22:51:56.379521  418392 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1027 22:51:56.383188  418392 out.go:252]   - Booting up control plane ...
	I1027 22:51:56.383289  418392 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1027 22:51:56.383370  418392 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1027 22:51:56.383437  418392 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1027 22:51:56.399123  418392 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1027 22:51:56.399479  418392 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1027 22:51:56.406750  418392 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1027 22:51:56.407220  418392 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1027 22:51:56.407447  418392 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1027 22:51:56.537448  418392 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1027 22:51:56.537574  418392 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1027 22:51:58.033397  418392 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.500990827s
	I1027 22:51:58.037110  418392 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1027 22:51:58.037196  418392 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1027 22:51:58.037284  418392 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1027 22:51:58.037543  418392 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1027 22:52:00.375579  418392 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.337901548s
	I1027 22:52:04.543669  418392 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.506508301s
	I1027 22:52:04.785138  418392 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 6.746451592s
	I1027 22:52:04.823849  418392 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1027 22:52:04.839863  418392 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1027 22:52:04.857746  418392 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1027 22:52:04.858176  418392 kubeadm.go:319] [mark-control-plane] Marking the node scheduled-stop-727712 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1027 22:52:04.872653  418392 kubeadm.go:319] [bootstrap-token] Using token: x3pj4l.awtike51qlcy8cjp
	I1027 22:52:04.875591  418392 out.go:252]   - Configuring RBAC rules ...
	I1027 22:52:04.875710  418392 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1027 22:52:04.882516  418392 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1027 22:52:04.895074  418392 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1027 22:52:04.901536  418392 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1027 22:52:04.907372  418392 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1027 22:52:04.911786  418392 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1027 22:52:05.193240  418392 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1027 22:52:05.626626  418392 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1027 22:52:06.192194  418392 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1027 22:52:06.193272  418392 kubeadm.go:319] 
	I1027 22:52:06.193351  418392 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1027 22:52:06.193355  418392 kubeadm.go:319] 
	I1027 22:52:06.193435  418392 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1027 22:52:06.193439  418392 kubeadm.go:319] 
	I1027 22:52:06.193463  418392 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1027 22:52:06.193524  418392 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1027 22:52:06.193577  418392 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1027 22:52:06.193580  418392 kubeadm.go:319] 
	I1027 22:52:06.193635  418392 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1027 22:52:06.193639  418392 kubeadm.go:319] 
	I1027 22:52:06.193688  418392 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1027 22:52:06.193691  418392 kubeadm.go:319] 
	I1027 22:52:06.193745  418392 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1027 22:52:06.193823  418392 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1027 22:52:06.193893  418392 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1027 22:52:06.193900  418392 kubeadm.go:319] 
	I1027 22:52:06.193987  418392 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1027 22:52:06.194066  418392 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1027 22:52:06.194070  418392 kubeadm.go:319] 
	I1027 22:52:06.194157  418392 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token x3pj4l.awtike51qlcy8cjp \
	I1027 22:52:06.194264  418392 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:e5bde37e868b0a9f20cc35703c4d8ced7fe96b47e180bf7d5d1b064d5adb88da \
	I1027 22:52:06.194289  418392 kubeadm.go:319] 	--control-plane 
	I1027 22:52:06.194292  418392 kubeadm.go:319] 
	I1027 22:52:06.194380  418392 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1027 22:52:06.194384  418392 kubeadm.go:319] 
	I1027 22:52:06.194468  418392 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token x3pj4l.awtike51qlcy8cjp \
	I1027 22:52:06.194574  418392 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:e5bde37e868b0a9f20cc35703c4d8ced7fe96b47e180bf7d5d1b064d5adb88da 
	I1027 22:52:06.198360  418392 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1027 22:52:06.198575  418392 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1027 22:52:06.198677  418392 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
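
The --discovery-token-ca-cert-hash printed in the join commands above is, per kubeadm's documented scheme, a SHA-256 over the cluster CA certificate's Subject Public Key Info. A Go sketch of the derivation, reading this run's CA from its path inside the node; it should reproduce the sha256:e5bde37e... value shown in the join command:

    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
    )

    func main() {
        data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            panic("no PEM block in ca.crt")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        // hash over the SubjectPublicKeyInfo, the same material kubeadm hashes
        sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
        fmt.Printf("sha256:%x\n", sum)
    }
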
	I1027 22:52:06.198691  418392 cni.go:84] Creating CNI manager for ""
	I1027 22:52:06.198698  418392 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1027 22:52:06.201949  418392 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1027 22:52:06.204978  418392 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1027 22:52:06.209261  418392 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1027 22:52:06.209279  418392 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1027 22:52:06.224963  418392 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1027 22:52:06.525485  418392 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1027 22:52:06.525622  418392 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 22:52:06.525719  418392 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes scheduled-stop-727712 minikube.k8s.io/updated_at=2025_10_27T22_52_06_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=e7da329725eb7dc274e4db0e5490c73fe54de60f minikube.k8s.io/name=scheduled-stop-727712 minikube.k8s.io/primary=true
	I1027 22:52:06.542334  418392 ops.go:34] apiserver oom_adj: -16
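
The oom_adj of -16 read back above keeps the kernel's OOM killer strongly biased away from kube-apiserver (the legacy scale runs -17 to 15, with -17 disabling OOM kills entirely). A trivial Go reader for the same proc file, with the PID supplied by the caller, e.g. from pgrep kube-apiserver as in the log:

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        if len(os.Args) < 2 {
            fmt.Fprintln(os.Stderr, "usage: oomadj <pid>")
            os.Exit(1)
        }
        b, err := os.ReadFile("/proc/" + os.Args[1] + "/oom_adj")
        if err != nil {
            panic(err)
        }
        fmt.Println(strings.TrimSpace(string(b))) // -16 for the apiserver in this run
    }
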
	I1027 22:52:06.764731  418392 kubeadm.go:1114] duration metric: took 239.154383ms to wait for elevateKubeSystemPrivileges
	I1027 22:52:06.802055  418392 kubeadm.go:403] duration metric: took 18.613010618s to StartCluster
	I1027 22:52:06.802081  418392 settings.go:142] acquiring lock: {Name:mkf2704123c96ec6115f0b73542ebb274f80c701 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 22:52:06.802144  418392 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21790-269600/kubeconfig
	I1027 22:52:06.802781  418392 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-269600/kubeconfig: {Name:mk0ee9c08e1ab37887a79c19b3bd04613966c4db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 22:52:06.802974  418392 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1027 22:52:06.803095  418392 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1027 22:52:06.803323  418392 config.go:182] Loaded profile config "scheduled-stop-727712": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1027 22:52:06.803355  418392 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1027 22:52:06.803409  418392 addons.go:69] Setting storage-provisioner=true in profile "scheduled-stop-727712"
	I1027 22:52:06.803422  418392 addons.go:238] Setting addon storage-provisioner=true in "scheduled-stop-727712"
	I1027 22:52:06.803447  418392 host.go:66] Checking if "scheduled-stop-727712" exists ...
	I1027 22:52:06.803942  418392 cli_runner.go:164] Run: docker container inspect scheduled-stop-727712 --format={{.State.Status}}
	I1027 22:52:06.804363  418392 addons.go:69] Setting default-storageclass=true in profile "scheduled-stop-727712"
	I1027 22:52:06.804380  418392 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "scheduled-stop-727712"
	I1027 22:52:06.804661  418392 cli_runner.go:164] Run: docker container inspect scheduled-stop-727712 --format={{.State.Status}}
	I1027 22:52:06.809623  418392 out.go:179] * Verifying Kubernetes components...
	I1027 22:52:06.814874  418392 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 22:52:06.834480  418392 addons.go:238] Setting addon default-storageclass=true in "scheduled-stop-727712"
	I1027 22:52:06.834507  418392 host.go:66] Checking if "scheduled-stop-727712" exists ...
	I1027 22:52:06.834973  418392 cli_runner.go:164] Run: docker container inspect scheduled-stop-727712 --format={{.State.Status}}
	I1027 22:52:06.845454  418392 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1027 22:52:06.850844  418392 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1027 22:52:06.850856  418392 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1027 22:52:06.850923  418392 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-727712
	I1027 22:52:06.862488  418392 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1027 22:52:06.862501  418392 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1027 22:52:06.862571  418392 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-727712
	I1027 22:52:06.880898  418392 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33335 SSHKeyPath:/home/jenkins/minikube-integration/21790-269600/.minikube/machines/scheduled-stop-727712/id_rsa Username:docker}
	I1027 22:52:06.905114  418392 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33335 SSHKeyPath:/home/jenkins/minikube-integration/21790-269600/.minikube/machines/scheduled-stop-727712/id_rsa Username:docker}
	I1027 22:52:07.048196  418392 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1027 22:52:07.053142  418392 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1027 22:52:07.080250  418392 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1027 22:52:07.083438  418392 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1027 22:52:07.454958  418392 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1027 22:52:07.456440  418392 api_server.go:52] waiting for apiserver process to appear ...
	I1027 22:52:07.456484  418392 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:52:07.633932  418392 api_server.go:72] duration metric: took 830.895862ms to wait for apiserver process to appear ...
	I1027 22:52:07.633944  418392 api_server.go:88] waiting for apiserver healthz status ...
	I1027 22:52:07.633960  418392 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1027 22:52:07.636865  418392 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1027 22:52:07.639402  418392 addons.go:514] duration metric: took 836.023799ms for enable addons: enabled=[default-storageclass storage-provisioner]
	I1027 22:52:07.648979  418392 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1027 22:52:07.649972  418392 api_server.go:141] control plane version: v1.34.1
	I1027 22:52:07.649986  418392 api_server.go:131] duration metric: took 16.036948ms to wait for apiserver health ...
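
The healthz probe above succeeds anonymously because Kubernetes' default RBAC exposes /healthz (along with /livez, /readyz, and /version) to unauthenticated users. A Go sketch of the same probe; InsecureSkipVerify is a shortcut for the sketch only, since minikube itself verifies against its CA:

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        cl := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := cl.Get("https://192.168.76.2:8443/healthz")
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Println(resp.StatusCode, string(body)) // 200 ok, as logged above
    }
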
	I1027 22:52:07.649993  418392 system_pods.go:43] waiting for kube-system pods to appear ...
	I1027 22:52:07.655379  418392 system_pods.go:59] 5 kube-system pods found
	I1027 22:52:07.655398  418392 system_pods.go:61] "etcd-scheduled-stop-727712" [b6f1c3d1-812d-42cb-8d23-03f8e4094eac] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1027 22:52:07.655408  418392 system_pods.go:61] "kube-apiserver-scheduled-stop-727712" [9f0a2dab-bde2-4e87-bcb7-df3f075fa89d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1027 22:52:07.655417  418392 system_pods.go:61] "kube-controller-manager-scheduled-stop-727712" [41b46a48-3c24-4f79-9b00-8fd5ec9c0938] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1027 22:52:07.655423  418392 system_pods.go:61] "kube-scheduler-scheduled-stop-727712" [cb8f5226-4f50-4672-a78d-7eeb0b6e7436] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1027 22:52:07.655428  418392 system_pods.go:61] "storage-provisioner" [5b1a327d-3c03-46f9-bb3e-c2ca777134ea] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1027 22:52:07.655433  418392 system_pods.go:74] duration metric: took 5.434977ms to wait for pod list to return data ...
	I1027 22:52:07.655444  418392 kubeadm.go:587] duration metric: took 852.413146ms to wait for: map[apiserver:true system_pods:true]
	I1027 22:52:07.655455  418392 node_conditions.go:102] verifying NodePressure condition ...
	I1027 22:52:07.659669  418392 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1027 22:52:07.659689  418392 node_conditions.go:123] node cpu capacity is 2
	I1027 22:52:07.659699  418392 node_conditions.go:105] duration metric: took 4.239436ms to run NodePressure ...
	I1027 22:52:07.659710  418392 start.go:242] waiting for startup goroutines ...
	I1027 22:52:07.959618  418392 kapi.go:214] "coredns" deployment in "kube-system" namespace and "scheduled-stop-727712" context rescaled to 1 replicas
	I1027 22:52:07.959648  418392 start.go:247] waiting for cluster config update ...
	I1027 22:52:07.959660  418392 start.go:256] writing updated cluster config ...
	I1027 22:52:07.959959  418392 ssh_runner.go:195] Run: rm -f paused
	I1027 22:52:08.022066  418392 start.go:626] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1027 22:52:08.025976  418392 out.go:179] * Done! kubectl is now configured to use "scheduled-stop-727712" cluster and "default" namespace by default
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                             NAMESPACE
	470713fd5059f       a1894772a478e       11 seconds ago      Running             etcd                      0                   73a8daf09bfc4       etcd-scheduled-stop-727712                      kube-system
	ea6b40d79beaa       43911e833d64d       11 seconds ago      Running             kube-apiserver            0                   3e763dc732c25       kube-apiserver-scheduled-stop-727712            kube-system
	87d46ba327bb4       7eb2c6ff0c5a7       11 seconds ago      Running             kube-controller-manager   0                   12de0f5565b69       kube-controller-manager-scheduled-stop-727712   kube-system
	3465522f0e6fe       b5f57ec6b9867       11 seconds ago      Running             kube-scheduler            0                   5233bb03777c8       kube-scheduler-scheduled-stop-727712            kube-system
	
	
	==> containerd <==
	Oct 27 22:51:58 scheduled-stop-727712 containerd[763]: time="2025-10-27T22:51:58.230293306Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-scheduled-stop-727712,Uid:a11bb2e02a7fcf4365c3eceb1a732bd5,Namespace:kube-system,Attempt:0,}"
	Oct 27 22:51:58 scheduled-stop-727712 containerd[763]: time="2025-10-27T22:51:58.236302098Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:etcd-scheduled-stop-727712,Uid:f90249fd5db74f66b8c702404888f66b,Namespace:kube-system,Attempt:0,}"
	Oct 27 22:51:58 scheduled-stop-727712 containerd[763]: time="2025-10-27T22:51:58.246170147Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-scheduled-stop-727712,Uid:36b305b48e9d4f07c3e68a639a017971,Namespace:kube-system,Attempt:0,}"
	Oct 27 22:51:58 scheduled-stop-727712 containerd[763]: time="2025-10-27T22:51:58.249545566Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-scheduled-stop-727712,Uid:2f77f89dc401b2fdd5ce53b49edcb9c9,Namespace:kube-system,Attempt:0,}"
	Oct 27 22:51:58 scheduled-stop-727712 containerd[763]: time="2025-10-27T22:51:58.333200276Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-scheduled-stop-727712,Uid:a11bb2e02a7fcf4365c3eceb1a732bd5,Namespace:kube-system,Attempt:0,} returns sandbox id \"5233bb03777c853709044870245c25eecc433539ff58d8adca485bf1f7440c45\""
	Oct 27 22:51:58 scheduled-stop-727712 containerd[763]: time="2025-10-27T22:51:58.342156266Z" level=info msg="CreateContainer within sandbox \"5233bb03777c853709044870245c25eecc433539ff58d8adca485bf1f7440c45\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
	Oct 27 22:51:58 scheduled-stop-727712 containerd[763]: time="2025-10-27T22:51:58.363184486Z" level=info msg="CreateContainer within sandbox \"5233bb03777c853709044870245c25eecc433539ff58d8adca485bf1f7440c45\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"3465522f0e6fecaf49e0a70169a576ed0728b6b203a509748e14fb8abb0054d1\""
	Oct 27 22:51:58 scheduled-stop-727712 containerd[763]: time="2025-10-27T22:51:58.363957179Z" level=info msg="StartContainer for \"3465522f0e6fecaf49e0a70169a576ed0728b6b203a509748e14fb8abb0054d1\""
	Oct 27 22:51:58 scheduled-stop-727712 containerd[763]: time="2025-10-27T22:51:58.441675771Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-scheduled-stop-727712,Uid:36b305b48e9d4f07c3e68a639a017971,Namespace:kube-system,Attempt:0,} returns sandbox id \"12de0f5565b6977e9fd798f57b3100fda5662da154002d146ef68993b2a8e822\""
	Oct 27 22:51:58 scheduled-stop-727712 containerd[763]: time="2025-10-27T22:51:58.449316014Z" level=info msg="CreateContainer within sandbox \"12de0f5565b6977e9fd798f57b3100fda5662da154002d146ef68993b2a8e822\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
	Oct 27 22:51:58 scheduled-stop-727712 containerd[763]: time="2025-10-27T22:51:58.469030670Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-scheduled-stop-727712,Uid:2f77f89dc401b2fdd5ce53b49edcb9c9,Namespace:kube-system,Attempt:0,} returns sandbox id \"3e763dc732c25f72bf880a5a741acd59183f6d12a5fd3e76f66b87e05566171d\""
	Oct 27 22:51:58 scheduled-stop-727712 containerd[763]: time="2025-10-27T22:51:58.479050227Z" level=info msg="CreateContainer within sandbox \"12de0f5565b6977e9fd798f57b3100fda5662da154002d146ef68993b2a8e822\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"87d46ba327bb4fa01b948b9d8c95d50b61e6c69547b9cfc1ea1bd3ed84116de0\""
	Oct 27 22:51:58 scheduled-stop-727712 containerd[763]: time="2025-10-27T22:51:58.480435226Z" level=info msg="StartContainer for \"87d46ba327bb4fa01b948b9d8c95d50b61e6c69547b9cfc1ea1bd3ed84116de0\""
	Oct 27 22:51:58 scheduled-stop-727712 containerd[763]: time="2025-10-27T22:51:58.483575968Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:etcd-scheduled-stop-727712,Uid:f90249fd5db74f66b8c702404888f66b,Namespace:kube-system,Attempt:0,} returns sandbox id \"73a8daf09bfc4efeebffb912068785007009d6edb4a803683c67b9dffe9977eb\""
	Oct 27 22:51:58 scheduled-stop-727712 containerd[763]: time="2025-10-27T22:51:58.484659154Z" level=info msg="CreateContainer within sandbox \"3e763dc732c25f72bf880a5a741acd59183f6d12a5fd3e76f66b87e05566171d\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
	Oct 27 22:51:58 scheduled-stop-727712 containerd[763]: time="2025-10-27T22:51:58.506597292Z" level=info msg="CreateContainer within sandbox \"73a8daf09bfc4efeebffb912068785007009d6edb4a803683c67b9dffe9977eb\" for container &ContainerMetadata{Name:etcd,Attempt:0,}"
	Oct 27 22:51:58 scheduled-stop-727712 containerd[763]: time="2025-10-27T22:51:58.552979020Z" level=info msg="CreateContainer within sandbox \"3e763dc732c25f72bf880a5a741acd59183f6d12a5fd3e76f66b87e05566171d\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"ea6b40d79beaa878730654815431d2160efdf71c999a6f0103e1f83f6849c43b\""
	Oct 27 22:51:58 scheduled-stop-727712 containerd[763]: time="2025-10-27T22:51:58.555310695Z" level=info msg="StartContainer for \"ea6b40d79beaa878730654815431d2160efdf71c999a6f0103e1f83f6849c43b\""
	Oct 27 22:51:58 scheduled-stop-727712 containerd[763]: time="2025-10-27T22:51:58.592719482Z" level=info msg="CreateContainer within sandbox \"73a8daf09bfc4efeebffb912068785007009d6edb4a803683c67b9dffe9977eb\" for &ContainerMetadata{Name:etcd,Attempt:0,} returns container id \"470713fd5059f25961f796c5a9231ed8665ec82e7c9c9073d24d476488b86aeb\""
	Oct 27 22:51:58 scheduled-stop-727712 containerd[763]: time="2025-10-27T22:51:58.598848152Z" level=info msg="StartContainer for \"3465522f0e6fecaf49e0a70169a576ed0728b6b203a509748e14fb8abb0054d1\" returns successfully"
	Oct 27 22:51:58 scheduled-stop-727712 containerd[763]: time="2025-10-27T22:51:58.606605352Z" level=info msg="StartContainer for \"470713fd5059f25961f796c5a9231ed8665ec82e7c9c9073d24d476488b86aeb\""
	Oct 27 22:51:58 scheduled-stop-727712 containerd[763]: time="2025-10-27T22:51:58.610446704Z" level=info msg="StartContainer for \"87d46ba327bb4fa01b948b9d8c95d50b61e6c69547b9cfc1ea1bd3ed84116de0\" returns successfully"
	Oct 27 22:51:58 scheduled-stop-727712 containerd[763]: time="2025-10-27T22:51:58.741252874Z" level=info msg="StartContainer for \"ea6b40d79beaa878730654815431d2160efdf71c999a6f0103e1f83f6849c43b\" returns successfully"
	Oct 27 22:51:58 scheduled-stop-727712 containerd[763]: time="2025-10-27T22:51:58.757225460Z" level=info msg="StartContainer for \"470713fd5059f25961f796c5a9231ed8665ec82e7c9c9073d24d476488b86aeb\" returns successfully"
	Oct 27 22:52:09 scheduled-stop-727712 containerd[763]: time="2025-10-27T22:52:09.466655048Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
	
	
	==> describe nodes <==
	Name:               scheduled-stop-727712
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=scheduled-stop-727712
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e7da329725eb7dc274e4db0e5490c73fe54de60f
	                    minikube.k8s.io/name=scheduled-stop-727712
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_27T22_52_06_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 27 Oct 2025 22:52:02 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  scheduled-stop-727712
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 27 Oct 2025 22:52:05 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 27 Oct 2025 22:52:05 +0000   Mon, 27 Oct 2025 22:51:59 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 27 Oct 2025 22:52:05 +0000   Mon, 27 Oct 2025 22:51:59 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 27 Oct 2025 22:52:05 +0000   Mon, 27 Oct 2025 22:51:59 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Mon, 27 Oct 2025 22:52:05 +0000   Mon, 27 Oct 2025 22:51:59 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    scheduled-stop-727712
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                edba9d75-2e97-418e-b3c9-1495c89d2650
	  Boot ID:                    9ceac5df-4f07-4c4c-b81a-a03ec3534783
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.7.28
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (4 in total)
	  Namespace                   Name                                             CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                             ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-scheduled-stop-727712                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         4s
	  kube-system                 kube-apiserver-scheduled-stop-727712             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4s
	  kube-system                 kube-controller-manager-scheduled-stop-727712    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4s
	  kube-system                 kube-scheduler-scheduled-stop-727712             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                650m (32%)  0 (0%)
	  memory             100Mi (1%)  0 (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   NodeAllocatableEnforced  12s                kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 12s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 12s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  12s (x8 over 12s)  kubelet          Node scheduled-stop-727712 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12s (x8 over 12s)  kubelet          Node scheduled-stop-727712 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12s (x7 over 12s)  kubelet          Node scheduled-stop-727712 status is now: NodeHasSufficientPID
	  Normal   Starting                 4s                 kubelet          Starting kubelet.
	  Warning  CgroupV1                 4s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeAllocatableEnforced  4s                 kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  4s                 kubelet          Node scheduled-stop-727712 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    4s                 kubelet          Node scheduled-stop-727712 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     4s                 kubelet          Node scheduled-stop-727712 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           0s                 node-controller  Node scheduled-stop-727712 event: Registered Node scheduled-stop-727712 in Controller
	
	
	==> dmesg <==
	[Oct27 21:49] overlayfs: idmapped layers are currently not supported
	[ +30.761565] overlayfs: idmapped layers are currently not supported
	[Oct27 21:51] overlayfs: idmapped layers are currently not supported
	[Oct27 21:53] overlayfs: idmapped layers are currently not supported
	[Oct27 21:54] overlayfs: idmapped layers are currently not supported
	[Oct27 21:55] overlayfs: idmapped layers are currently not supported
	[Oct27 21:59] overlayfs: idmapped layers are currently not supported
	[Oct27 22:00] overlayfs: idmapped layers are currently not supported
	[ +24.025643] overlayfs: idmapped layers are currently not supported
	[Oct27 22:01] overlayfs: idmapped layers are currently not supported
	[Oct27 22:02] overlayfs: idmapped layers are currently not supported
	[ +55.889286] overlayfs: idmapped layers are currently not supported
	[Oct27 22:03] overlayfs: idmapped layers are currently not supported
	[Oct27 22:04] overlayfs: idmapped layers are currently not supported
	[Oct27 22:05] overlayfs: idmapped layers are currently not supported
	[ +46.465103] overlayfs: idmapped layers are currently not supported
	[Oct27 22:06] overlayfs: idmapped layers are currently not supported
	[  +0.509504] overlayfs: idmapped layers are currently not supported
	[Oct27 22:07] overlayfs: idmapped layers are currently not supported
	[Oct27 22:08] overlayfs: idmapped layers are currently not supported
	[Oct27 22:09] overlayfs: idmapped layers are currently not supported
	[Oct27 22:10] overlayfs: idmapped layers are currently not supported
	[Oct27 22:11] overlayfs: idmapped layers are currently not supported
	[ +52.535536] overlayfs: idmapped layers are currently not supported
	[Oct27 22:14] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> etcd [470713fd5059f25961f796c5a9231ed8665ec82e7c9c9073d24d476488b86aeb] <==
	{"level":"warn","ts":"2025-10-27T22:52:00.958720Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51214","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:52:00.997714Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51260","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:52:00.999573Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51234","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:52:01.008578Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51270","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:52:01.025570Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51290","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:52:01.059492Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51296","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:52:01.062297Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51304","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:52:01.093534Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51322","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:52:01.106688Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51338","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:52:01.131180Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51366","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:52:01.140165Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51372","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:52:01.168850Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51402","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:52:01.190464Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51412","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:52:01.216280Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51434","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:52:01.226161Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51458","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:52:01.243611Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51490","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:52:01.270342Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51506","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:52:01.284214Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51518","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:52:01.307133Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51526","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:52:01.332316Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51546","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:52:01.345029Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51568","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:52:01.371174Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51594","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:52:01.385129Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51606","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:52:01.418658Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51616","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:52:01.510010Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51636","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 22:52:09 up  2:34,  0 user,  load average: 2.05, 2.15, 2.56
	Linux scheduled-stop-727712 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kube-apiserver [ea6b40d79beaa878730654815431d2160efdf71c999a6f0103e1f83f6849c43b] <==
	I1027 22:52:02.512553       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1027 22:52:02.512637       1 policy_source.go:240] refreshing policies
	I1027 22:52:02.512721       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1027 22:52:02.514203       1 controller.go:667] quota admission added evaluator for: namespaces
	I1027 22:52:02.537995       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1027 22:52:02.626758       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1027 22:52:02.626928       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1027 22:52:02.725967       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1027 22:52:02.726078       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1027 22:52:03.168425       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1027 22:52:03.184484       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1027 22:52:03.184734       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1027 22:52:04.168873       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1027 22:52:04.245669       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1027 22:52:04.351617       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1027 22:52:04.411109       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1027 22:52:04.437794       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1027 22:52:04.440537       1 controller.go:667] quota admission added evaluator for: endpoints
	I1027 22:52:04.449167       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1027 22:52:05.605803       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1027 22:52:05.625059       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1027 22:52:05.639484       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1027 22:52:09.461409       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1027 22:52:09.468268       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1027 22:52:09.902704       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [87d46ba327bb4fa01b948b9d8c95d50b61e6c69547b9cfc1ea1bd3ed84116de0] <==
	I1027 22:52:09.396935       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1027 22:52:09.396985       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1027 22:52:09.397147       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1027 22:52:09.398190       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1027 22:52:09.399276       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1027 22:52:09.399947       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1027 22:52:09.400045       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1027 22:52:09.400218       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1027 22:52:09.400625       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1027 22:52:09.403836       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1027 22:52:09.407099       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1027 22:52:09.407346       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1027 22:52:09.416609       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1027 22:52:09.430117       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1027 22:52:09.444874       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1027 22:52:09.444972       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1027 22:52:09.445044       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="scheduled-stop-727712"
	I1027 22:52:09.445087       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1027 22:52:09.445118       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1027 22:52:09.445696       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1027 22:52:09.447481       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1027 22:52:09.447910       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1027 22:52:09.462253       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1027 22:52:09.462275       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1027 22:52:09.462282       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-scheduler [3465522f0e6fecaf49e0a70169a576ed0728b6b203a509748e14fb8abb0054d1] <==
	I1027 22:52:01.519354       1 serving.go:386] Generated self-signed cert in-memory
	I1027 22:52:04.766942       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1027 22:52:04.766981       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1027 22:52:04.772596       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1027 22:52:04.772679       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1027 22:52:04.772709       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1027 22:52:04.772747       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1027 22:52:04.778972       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1027 22:52:04.779023       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1027 22:52:04.779162       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1027 22:52:04.779175       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1027 22:52:04.872798       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1027 22:52:04.879624       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1027 22:52:04.879690       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kubelet <==
	Oct 27 22:52:06 scheduled-stop-727712 kubelet[1505]: I1027 22:52:06.552248    1505 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 27 22:52:06 scheduled-stop-727712 kubelet[1505]: I1027 22:52:06.640344    1505 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-scheduled-stop-727712" podStartSLOduration=1.640324867 podStartE2EDuration="1.640324867s" podCreationTimestamp="2025-10-27 22:52:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 22:52:06.597495537 +0000 UTC m=+1.161614495" watchObservedRunningTime="2025-10-27 22:52:06.640324867 +0000 UTC m=+1.204443825"
	Oct 27 22:52:06 scheduled-stop-727712 kubelet[1505]: I1027 22:52:06.655640    1505 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-scheduled-stop-727712"
	Oct 27 22:52:06 scheduled-stop-727712 kubelet[1505]: I1027 22:52:06.662636    1505 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-scheduled-stop-727712"
	Oct 27 22:52:06 scheduled-stop-727712 kubelet[1505]: I1027 22:52:06.663182    1505 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-scheduled-stop-727712" podStartSLOduration=1.663167489 podStartE2EDuration="1.663167489s" podCreationTimestamp="2025-10-27 22:52:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 22:52:06.640650335 +0000 UTC m=+1.204769301" watchObservedRunningTime="2025-10-27 22:52:06.663167489 +0000 UTC m=+1.227286496"
	Oct 27 22:52:06 scheduled-stop-727712 kubelet[1505]: I1027 22:52:06.663339    1505 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-scheduled-stop-727712" podStartSLOduration=1.663331216 podStartE2EDuration="1.663331216s" podCreationTimestamp="2025-10-27 22:52:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 22:52:06.662636235 +0000 UTC m=+1.226755217" watchObservedRunningTime="2025-10-27 22:52:06.663331216 +0000 UTC m=+1.227450199"
	Oct 27 22:52:06 scheduled-stop-727712 kubelet[1505]: E1027 22:52:06.668481    1505 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-scheduled-stop-727712\" already exists" pod="kube-system/etcd-scheduled-stop-727712"
	Oct 27 22:52:06 scheduled-stop-727712 kubelet[1505]: E1027 22:52:06.675609    1505 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-scheduled-stop-727712\" already exists" pod="kube-system/kube-apiserver-scheduled-stop-727712"
	Oct 27 22:52:06 scheduled-stop-727712 kubelet[1505]: I1027 22:52:06.700722    1505 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-scheduled-stop-727712" podStartSLOduration=1.700701471 podStartE2EDuration="1.700701471s" podCreationTimestamp="2025-10-27 22:52:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 22:52:06.685528581 +0000 UTC m=+1.249647538" watchObservedRunningTime="2025-10-27 22:52:06.700701471 +0000 UTC m=+1.264820429"
	Oct 27 22:52:09 scheduled-stop-727712 kubelet[1505]: I1027 22:52:09.465186    1505 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 27 22:52:09 scheduled-stop-727712 kubelet[1505]: I1027 22:52:09.466903    1505 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 27 22:52:10 scheduled-stop-727712 kubelet[1505]: I1027 22:52:10.006012    1505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3cd975eb-73fd-47bf-b537-2f092a7d019d-xtables-lock\") pod \"kindnet-jzkr4\" (UID: \"3cd975eb-73fd-47bf-b537-2f092a7d019d\") " pod="kube-system/kindnet-jzkr4"
	Oct 27 22:52:10 scheduled-stop-727712 kubelet[1505]: I1027 22:52:10.006119    1505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2rtg\" (UniqueName: \"kubernetes.io/projected/f3e2228d-d053-48f5-94e8-42d9fcaf5226-kube-api-access-h2rtg\") pod \"kube-proxy-mfzjd\" (UID: \"f3e2228d-d053-48f5-94e8-42d9fcaf5226\") " pod="kube-system/kube-proxy-mfzjd"
	Oct 27 22:52:10 scheduled-stop-727712 kubelet[1505]: I1027 22:52:10.006187    1505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/3cd975eb-73fd-47bf-b537-2f092a7d019d-cni-cfg\") pod \"kindnet-jzkr4\" (UID: \"3cd975eb-73fd-47bf-b537-2f092a7d019d\") " pod="kube-system/kindnet-jzkr4"
	Oct 27 22:52:10 scheduled-stop-727712 kubelet[1505]: I1027 22:52:10.006209    1505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f3e2228d-d053-48f5-94e8-42d9fcaf5226-kube-proxy\") pod \"kube-proxy-mfzjd\" (UID: \"f3e2228d-d053-48f5-94e8-42d9fcaf5226\") " pod="kube-system/kube-proxy-mfzjd"
	Oct 27 22:52:10 scheduled-stop-727712 kubelet[1505]: I1027 22:52:10.006259    1505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3cd975eb-73fd-47bf-b537-2f092a7d019d-lib-modules\") pod \"kindnet-jzkr4\" (UID: \"3cd975eb-73fd-47bf-b537-2f092a7d019d\") " pod="kube-system/kindnet-jzkr4"
	Oct 27 22:52:10 scheduled-stop-727712 kubelet[1505]: I1027 22:52:10.006330    1505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f3e2228d-d053-48f5-94e8-42d9fcaf5226-xtables-lock\") pod \"kube-proxy-mfzjd\" (UID: \"f3e2228d-d053-48f5-94e8-42d9fcaf5226\") " pod="kube-system/kube-proxy-mfzjd"
	Oct 27 22:52:10 scheduled-stop-727712 kubelet[1505]: I1027 22:52:10.006352    1505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6jf8\" (UniqueName: \"kubernetes.io/projected/3cd975eb-73fd-47bf-b537-2f092a7d019d-kube-api-access-r6jf8\") pod \"kindnet-jzkr4\" (UID: \"3cd975eb-73fd-47bf-b537-2f092a7d019d\") " pod="kube-system/kindnet-jzkr4"
	Oct 27 22:52:10 scheduled-stop-727712 kubelet[1505]: I1027 22:52:10.006506    1505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f3e2228d-d053-48f5-94e8-42d9fcaf5226-lib-modules\") pod \"kube-proxy-mfzjd\" (UID: \"f3e2228d-d053-48f5-94e8-42d9fcaf5226\") " pod="kube-system/kube-proxy-mfzjd"
	Oct 27 22:52:10 scheduled-stop-727712 kubelet[1505]: E1027 22:52:10.131375    1505 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Oct 27 22:52:10 scheduled-stop-727712 kubelet[1505]: E1027 22:52:10.131406    1505 projected.go:196] Error preparing data for projected volume kube-api-access-r6jf8 for pod kube-system/kindnet-jzkr4: configmap "kube-root-ca.crt" not found
	Oct 27 22:52:10 scheduled-stop-727712 kubelet[1505]: E1027 22:52:10.131481    1505 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3cd975eb-73fd-47bf-b537-2f092a7d019d-kube-api-access-r6jf8 podName:3cd975eb-73fd-47bf-b537-2f092a7d019d nodeName:}" failed. No retries permitted until 2025-10-27 22:52:10.631456607 +0000 UTC m=+5.195575565 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-r6jf8" (UniqueName: "kubernetes.io/projected/3cd975eb-73fd-47bf-b537-2f092a7d019d-kube-api-access-r6jf8") pod "kindnet-jzkr4" (UID: "3cd975eb-73fd-47bf-b537-2f092a7d019d") : configmap "kube-root-ca.crt" not found
	Oct 27 22:52:10 scheduled-stop-727712 kubelet[1505]: E1027 22:52:10.132070    1505 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Oct 27 22:52:10 scheduled-stop-727712 kubelet[1505]: E1027 22:52:10.132091    1505 projected.go:196] Error preparing data for projected volume kube-api-access-h2rtg for pod kube-system/kube-proxy-mfzjd: configmap "kube-root-ca.crt" not found
	Oct 27 22:52:10 scheduled-stop-727712 kubelet[1505]: E1027 22:52:10.132141    1505 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f3e2228d-d053-48f5-94e8-42d9fcaf5226-kube-api-access-h2rtg podName:f3e2228d-d053-48f5-94e8-42d9fcaf5226 nodeName:}" failed. No retries permitted until 2025-10-27 22:52:10.632124199 +0000 UTC m=+5.196243165 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-h2rtg" (UniqueName: "kubernetes.io/projected/f3e2228d-d053-48f5-94e8-42d9fcaf5226-kube-api-access-h2rtg") pod "kube-proxy-mfzjd" (UID: "f3e2228d-d053-48f5-94e8-42d9fcaf5226") : configmap "kube-root-ca.crt" not found
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p scheduled-stop-727712 -n scheduled-stop-727712
helpers_test.go:269: (dbg) Run:  kubectl --context scheduled-stop-727712 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-tsptq kindnet-jzkr4 kube-proxy-mfzjd storage-provisioner
helpers_test.go:282: ======> post-mortem[TestScheduledStopUnix]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context scheduled-stop-727712 describe pod coredns-66bc5c9577-tsptq kindnet-jzkr4 kube-proxy-mfzjd storage-provisioner
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context scheduled-stop-727712 describe pod coredns-66bc5c9577-tsptq kindnet-jzkr4 kube-proxy-mfzjd storage-provisioner: exit status 1 (118.622223ms)

** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-tsptq" not found
	Error from server (NotFound): pods "kindnet-jzkr4" not found
	Error from server (NotFound): pods "kube-proxy-mfzjd" not found
	Error from server (NotFound): pods "storage-provisioner" not found

** /stderr **
helpers_test.go:287: kubectl --context scheduled-stop-727712 describe pod coredns-66bc5c9577-tsptq kindnet-jzkr4 kube-proxy-mfzjd storage-provisioner: exit status 1
helpers_test.go:175: Cleaning up "scheduled-stop-727712" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-727712
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-727712: (2.188646281s)
--- FAIL: TestScheduledStopUnix (36.43s)
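
For context on the single failure above: the assertion at scheduled_stop_test.go:98 expects that re-running `minikube stop --schedule` kills the daemon process left behind by the previous schedule before arming a new one; here, process 420762 from the `--schedule 5m` invocation was still alive after the `--schedule 15s` rerun. Below is a minimal Go sketch of that invariant — illustrative only, not minikube's actual implementation; `killExisting` is a hypothetical helper, and the PID is taken from the failure message.

	package main
	
	import (
		"fmt"
		"os"
		"syscall"
	)
	
	// killExisting terminates a previously recorded scheduled-stop daemon,
	// mirroring the expectation behind scheduled_stop_test.go:98: a new
	// `minikube stop --schedule` must not leave the old daemon running.
	func killExisting(pid int) error {
		proc, err := os.FindProcess(pid) // always succeeds on Unix
		if err != nil {
			return err
		}
		// Signal 0 probes liveness without delivering a signal.
		if err := proc.Signal(syscall.Signal(0)); err != nil {
			return nil // old daemon already gone; nothing to do
		}
		return proc.Signal(syscall.SIGTERM)
	}
	
	func main() {
		// 420762 is the leftover PID reported by the failed assertion above.
		if err := killExisting(420762); err != nil {
			fmt.Fprintln(os.Stderr, "failed to kill old scheduled-stop daemon:", err)
		}
	}

Under this model, the second `stop --schedule` call would first reap the old daemon, so the test's liveness check on the recorded PID would find nothing running.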


Test pass (301/332)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 6.39
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.09
9 TestDownloadOnly/v1.28.0/DeleteAll 0.22
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.15
12 TestDownloadOnly/v1.34.1/json-events 6.7
13 TestDownloadOnly/v1.34.1/preload-exists 0
17 TestDownloadOnly/v1.34.1/LogsDuration 0.09
18 TestDownloadOnly/v1.34.1/DeleteAll 0.21
19 TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds 0.15
21 TestBinaryMirror 0.6
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.08
27 TestAddons/Setup 170.31
29 TestAddons/serial/Volcano 39.68
31 TestAddons/serial/GCPAuth/Namespaces 0.18
32 TestAddons/serial/GCPAuth/FakeCredentials 10.84
35 TestAddons/parallel/Registry 15.43
36 TestAddons/parallel/RegistryCreds 0.88
37 TestAddons/parallel/Ingress 20.38
38 TestAddons/parallel/InspektorGadget 5.33
39 TestAddons/parallel/MetricsServer 6.85
41 TestAddons/parallel/CSI 55.95
42 TestAddons/parallel/Headlamp 18.14
43 TestAddons/parallel/CloudSpanner 5.94
44 TestAddons/parallel/LocalPath 8.72
45 TestAddons/parallel/NvidiaDevicePlugin 6.62
46 TestAddons/parallel/Yakd 12
48 TestAddons/StoppedEnableDisable 12.38
49 TestCertOptions 39.43
50 TestCertExpiration 233.36
52 TestForceSystemdFlag 47.94
53 TestForceSystemdEnv 47.16
54 TestDockerEnvContainerd 50.84
58 TestErrorSpam/setup 32.22
59 TestErrorSpam/start 0.81
60 TestErrorSpam/status 1.09
61 TestErrorSpam/pause 1.76
62 TestErrorSpam/unpause 1.9
63 TestErrorSpam/stop 1.61
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 83.72
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 7.32
70 TestFunctional/serial/KubeContext 0.06
71 TestFunctional/serial/KubectlGetPods 0.09
74 TestFunctional/serial/CacheCmd/cache/add_remote 3.75
75 TestFunctional/serial/CacheCmd/cache/add_local 1.18
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.07
77 TestFunctional/serial/CacheCmd/cache/list 0.05
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.31
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.98
80 TestFunctional/serial/CacheCmd/cache/delete 0.14
81 TestFunctional/serial/MinikubeKubectlCmd 0.14
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.14
83 TestFunctional/serial/ExtraConfig 42.41
84 TestFunctional/serial/ComponentHealth 0.1
85 TestFunctional/serial/LogsCmd 1.49
86 TestFunctional/serial/LogsFileCmd 1.49
87 TestFunctional/serial/InvalidService 4.29
89 TestFunctional/parallel/ConfigCmd 0.49
90 TestFunctional/parallel/DashboardCmd 9.16
91 TestFunctional/parallel/DryRun 0.48
92 TestFunctional/parallel/InternationalLanguage 0.22
93 TestFunctional/parallel/StatusCmd 1.15
97 TestFunctional/parallel/ServiceCmdConnect 7.7
98 TestFunctional/parallel/AddonsCmd 0.15
99 TestFunctional/parallel/PersistentVolumeClaim 24.99
101 TestFunctional/parallel/SSHCmd 0.73
102 TestFunctional/parallel/CpCmd 2.49
104 TestFunctional/parallel/FileSync 0.36
105 TestFunctional/parallel/CertSync 2.25
109 TestFunctional/parallel/NodeLabels 0.12
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.63
113 TestFunctional/parallel/License 0.34
115 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.68
116 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
118 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.44
119 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.09
120 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
124 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
125 TestFunctional/parallel/ServiceCmd/DeployApp 8.22
126 TestFunctional/parallel/ProfileCmd/profile_not_create 0.57
127 TestFunctional/parallel/ServiceCmd/List 0.6
128 TestFunctional/parallel/ProfileCmd/profile_list 0.5
129 TestFunctional/parallel/ServiceCmd/JSONOutput 0.56
130 TestFunctional/parallel/ProfileCmd/profile_json_output 0.66
131 TestFunctional/parallel/ServiceCmd/HTTPS 0.79
132 TestFunctional/parallel/MountCmd/any-port 7.98
133 TestFunctional/parallel/ServiceCmd/Format 0.59
134 TestFunctional/parallel/ServiceCmd/URL 0.48
135 TestFunctional/parallel/MountCmd/specific-port 2.65
136 TestFunctional/parallel/MountCmd/VerifyCleanup 2.07
137 TestFunctional/parallel/Version/short 0.07
138 TestFunctional/parallel/Version/components 1.29
139 TestFunctional/parallel/ImageCommands/ImageListShort 0.31
140 TestFunctional/parallel/ImageCommands/ImageListTable 0.28
141 TestFunctional/parallel/ImageCommands/ImageListJson 0.3
142 TestFunctional/parallel/ImageCommands/ImageListYaml 0.28
143 TestFunctional/parallel/ImageCommands/ImageBuild 4.09
144 TestFunctional/parallel/ImageCommands/Setup 0.68
145 TestFunctional/parallel/UpdateContextCmd/no_changes 0.21
146 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.23
147 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.2
148 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.37
149 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.36
150 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.44
151 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.43
152 TestFunctional/parallel/ImageCommands/ImageRemove 0.49
153 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.63
154 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.41
155 TestFunctional/delete_echo-server_images 0.04
156 TestFunctional/delete_my-image_image 0.02
157 TestFunctional/delete_minikube_cached_images 0.02
162 TestMultiControlPlane/serial/StartCluster 216.01
163 TestMultiControlPlane/serial/DeployApp 44.05
164 TestMultiControlPlane/serial/PingHostFromPods 1.66
165 TestMultiControlPlane/serial/AddWorkerNode 61.12
166 TestMultiControlPlane/serial/NodeLabels 0.12
167 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.2
168 TestMultiControlPlane/serial/CopyFile 20.9
169 TestMultiControlPlane/serial/StopSecondaryNode 13
170 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.85
171 TestMultiControlPlane/serial/RestartSecondaryNode 14.41
172 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.54
173 TestMultiControlPlane/serial/RestartClusterKeepsNodes 108.58
174 TestMultiControlPlane/serial/DeleteSecondaryNode 10.52
175 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.83
176 TestMultiControlPlane/serial/StopCluster 36.47
177 TestMultiControlPlane/serial/RestartCluster 62.55
178 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.79
179 TestMultiControlPlane/serial/AddSecondaryNode 81.28
180 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.08
185 TestJSONOutput/start/Command 49.96
186 TestJSONOutput/start/Audit 0
188 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
191 TestJSONOutput/pause/Command 0.72
192 TestJSONOutput/pause/Audit 0
194 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
197 TestJSONOutput/unpause/Command 0.64
198 TestJSONOutput/unpause/Audit 0
200 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
203 TestJSONOutput/stop/Command 6.02
204 TestJSONOutput/stop/Audit 0
206 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
207 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
208 TestErrorJSONOutput 0.24
210 TestKicCustomNetwork/create_custom_network 41.9
211 TestKicCustomNetwork/use_default_bridge_network 38.14
212 TestKicExistingNetwork 36.47
213 TestKicCustomSubnet 37.38
214 TestKicStaticIP 38.78
215 TestMainNoArgs 0.05
216 TestMinikubeProfile 73.36
219 TestMountStart/serial/StartWithMountFirst 8.66
220 TestMountStart/serial/VerifyMountFirst 0.27
221 TestMountStart/serial/StartWithMountSecond 10.29
222 TestMountStart/serial/VerifyMountSecond 0.28
223 TestMountStart/serial/DeleteFirst 1.74
224 TestMountStart/serial/VerifyMountPostDelete 0.28
225 TestMountStart/serial/Stop 1.31
226 TestMountStart/serial/RestartStopped 7.47
227 TestMountStart/serial/VerifyMountPostStop 0.28
230 TestMultiNode/serial/FreshStart2Nodes 139.08
231 TestMultiNode/serial/DeployApp2Nodes 4.92
232 TestMultiNode/serial/PingHostFrom2Pods 1.03
233 TestMultiNode/serial/AddNode 28.33
234 TestMultiNode/serial/MultiNodeLabels 0.08
235 TestMultiNode/serial/ProfileList 0.73
236 TestMultiNode/serial/CopyFile 10.79
237 TestMultiNode/serial/StopNode 2.74
238 TestMultiNode/serial/StartAfterStop 8.35
239 TestMultiNode/serial/RestartKeepsNodes 80.57
240 TestMultiNode/serial/DeleteNode 5.75
241 TestMultiNode/serial/StopMultiNode 24.1
242 TestMultiNode/serial/RestartMultiNode 51.97
243 TestMultiNode/serial/ValidateNameConflict 36.97
248 TestPreload 123.59
253 TestInsufficientStorage 13.18
254 TestRunningBinaryUpgrade 60.07
256 TestKubernetesUpgrade 350.99
257 TestMissingContainerUpgrade 177.23
259 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
260 TestNoKubernetes/serial/StartWithK8s 48.29
261 TestNoKubernetes/serial/StartWithStopK8s 18.29
262 TestNoKubernetes/serial/Start 7.88
263 TestNoKubernetes/serial/VerifyK8sNotRunning 0.28
264 TestNoKubernetes/serial/ProfileList 0.7
265 TestNoKubernetes/serial/Stop 1.3
266 TestNoKubernetes/serial/StartNoArgs 7.24
267 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.36
268 TestStoppedBinaryUpgrade/Setup 1.55
269 TestStoppedBinaryUpgrade/Upgrade 65.82
270 TestStoppedBinaryUpgrade/MinikubeLogs 1.5
279 TestPause/serial/Start 80.61
280 TestPause/serial/SecondStartNoReconfiguration 7.27
281 TestPause/serial/Pause 0.85
282 TestPause/serial/VerifyStatus 0.34
283 TestPause/serial/Unpause 0.8
284 TestPause/serial/PauseAgain 1.16
285 TestPause/serial/DeletePaused 3.09
286 TestPause/serial/VerifyDeletedResources 0.46
294 TestNetworkPlugins/group/false 5.73
299 TestStartStop/group/old-k8s-version/serial/FirstStart 60.56
300 TestStartStop/group/old-k8s-version/serial/DeployApp 9.4
301 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.2
302 TestStartStop/group/old-k8s-version/serial/Stop 12.07
303 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.19
304 TestStartStop/group/old-k8s-version/serial/SecondStart 28.79
305 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 8
306 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 6.11
307 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.29
308 TestStartStop/group/old-k8s-version/serial/Pause 3.09
310 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 87.63
312 TestStartStop/group/embed-certs/serial/FirstStart 56.69
313 TestStartStop/group/embed-certs/serial/DeployApp 8.37
314 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.21
315 TestStartStop/group/embed-certs/serial/Stop 12.52
316 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.48
317 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.66
318 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.32
319 TestStartStop/group/embed-certs/serial/SecondStart 50.65
320 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.39
321 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.36
322 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 55.81
323 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
324 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 6.09
325 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.25
326 TestStartStop/group/embed-certs/serial/Pause 3.22
327 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
329 TestStartStop/group/no-preload/serial/FirstStart 70.23
330 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 6.12
331 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.32
332 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.82
334 TestStartStop/group/newest-cni/serial/FirstStart 45.45
335 TestStartStop/group/newest-cni/serial/DeployApp 0
336 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.01
337 TestStartStop/group/newest-cni/serial/Stop 1.41
338 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.19
339 TestStartStop/group/newest-cni/serial/SecondStart 17.86
340 TestStartStop/group/no-preload/serial/DeployApp 9.6
341 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.68
342 TestStartStop/group/no-preload/serial/Stop 14.67
343 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
344 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
345 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.28
346 TestStartStop/group/newest-cni/serial/Pause 3.06
347 TestNetworkPlugins/group/auto/Start 89.95
348 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.22
349 TestStartStop/group/no-preload/serial/SecondStart 55.61
350 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6
351 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.12
352 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.25
353 TestStartStop/group/no-preload/serial/Pause 3.23
354 TestNetworkPlugins/group/kindnet/Start 84.08
355 TestNetworkPlugins/group/auto/KubeletFlags 0.34
356 TestNetworkPlugins/group/auto/NetCatPod 9.36
357 TestNetworkPlugins/group/auto/DNS 0.26
358 TestNetworkPlugins/group/auto/Localhost 0.2
359 TestNetworkPlugins/group/auto/HairPin 0.22
360 TestNetworkPlugins/group/calico/Start 56.28
361 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
362 TestNetworkPlugins/group/kindnet/KubeletFlags 0.48
363 TestNetworkPlugins/group/kindnet/NetCatPod 11.33
364 TestNetworkPlugins/group/kindnet/DNS 0.25
365 TestNetworkPlugins/group/kindnet/Localhost 0.18
366 TestNetworkPlugins/group/kindnet/HairPin 0.17
367 TestNetworkPlugins/group/calico/ControllerPod 6.01
368 TestNetworkPlugins/group/calico/KubeletFlags 0.5
369 TestNetworkPlugins/group/calico/NetCatPod 12.54
370 TestNetworkPlugins/group/calico/DNS 0.26
371 TestNetworkPlugins/group/calico/Localhost 0.27
372 TestNetworkPlugins/group/calico/HairPin 0.23
373 TestNetworkPlugins/group/custom-flannel/Start 69
374 TestNetworkPlugins/group/enable-default-cni/Start 80.5
375 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.35
376 TestNetworkPlugins/group/custom-flannel/NetCatPod 9.29
377 TestNetworkPlugins/group/custom-flannel/DNS 0.24
378 TestNetworkPlugins/group/custom-flannel/Localhost 0.15
379 TestNetworkPlugins/group/custom-flannel/HairPin 0.16
380 TestNetworkPlugins/group/flannel/Start 63.5
381 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.43
382 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.33
383 TestNetworkPlugins/group/enable-default-cni/DNS 0.22
384 TestNetworkPlugins/group/enable-default-cni/Localhost 0.19
385 TestNetworkPlugins/group/enable-default-cni/HairPin 0.17
386 TestNetworkPlugins/group/bridge/Start 48.08
387 TestNetworkPlugins/group/flannel/ControllerPod 6.01
388 TestNetworkPlugins/group/flannel/KubeletFlags 0.44
389 TestNetworkPlugins/group/flannel/NetCatPod 10.34
390 TestNetworkPlugins/group/flannel/DNS 0.26
391 TestNetworkPlugins/group/flannel/Localhost 0.27
392 TestNetworkPlugins/group/flannel/HairPin 0.15
393 TestNetworkPlugins/group/bridge/KubeletFlags 0.42
394 TestNetworkPlugins/group/bridge/NetCatPod 9.4
395 TestNetworkPlugins/group/bridge/DNS 0.24
396 TestNetworkPlugins/group/bridge/Localhost 0.21
397 TestNetworkPlugins/group/bridge/HairPin 0.2
TestDownloadOnly/v1.28.0/json-events (6.39s)

=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-041569 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-041569 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (6.393678384s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (6.39s)

TestDownloadOnly/v1.28.0/preload-exists (0s)
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1027 22:14:49.492123  271448 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime containerd
I1027 22:14:49.492206  271448 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21790-269600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

TestDownloadOnly/v1.28.0/LogsDuration (0.09s)
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-041569
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-041569: exit status 85 (92.523237ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                         ARGS                                                                                          │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-041569 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-041569 │ jenkins │ v1.37.0 │ 27 Oct 25 22:14 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/27 22:14:43
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1027 22:14:43.144586  271453 out.go:360] Setting OutFile to fd 1 ...
	I1027 22:14:43.144825  271453 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 22:14:43.144839  271453 out.go:374] Setting ErrFile to fd 2...
	I1027 22:14:43.144844  271453 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 22:14:43.145111  271453 root.go:340] Updating PATH: /home/jenkins/minikube-integration/21790-269600/.minikube/bin
	W1027 22:14:43.145295  271453 root.go:316] Error reading config file at /home/jenkins/minikube-integration/21790-269600/.minikube/config/config.json: open /home/jenkins/minikube-integration/21790-269600/.minikube/config/config.json: no such file or directory
	I1027 22:14:43.145713  271453 out.go:368] Setting JSON to true
	I1027 22:14:43.146591  271453 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":7034,"bootTime":1761596250,"procs":151,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1027 22:14:43.146662  271453 start.go:143] virtualization:  
	I1027 22:14:43.150674  271453 out.go:99] [download-only-041569] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	W1027 22:14:43.150866  271453 preload.go:349] Failed to list preload files: open /home/jenkins/minikube-integration/21790-269600/.minikube/cache/preloaded-tarball: no such file or directory
	I1027 22:14:43.150962  271453 notify.go:221] Checking for updates...
	I1027 22:14:43.153803  271453 out.go:171] MINIKUBE_LOCATION=21790
	I1027 22:14:43.156830  271453 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1027 22:14:43.159830  271453 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21790-269600/kubeconfig
	I1027 22:14:43.162661  271453 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21790-269600/.minikube
	I1027 22:14:43.165625  271453 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1027 22:14:43.171356  271453 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1027 22:14:43.171653  271453 driver.go:422] Setting default libvirt URI to qemu:///system
	I1027 22:14:43.202910  271453 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1027 22:14:43.203078  271453 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 22:14:43.261704  271453 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-10-27 22:14:43.251931451 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1027 22:14:43.261811  271453 docker.go:318] overlay module found
	I1027 22:14:43.265079  271453 out.go:99] Using the docker driver based on user configuration
	I1027 22:14:43.265119  271453 start.go:307] selected driver: docker
	I1027 22:14:43.265127  271453 start.go:928] validating driver "docker" against <nil>
	I1027 22:14:43.265245  271453 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 22:14:43.336423  271453 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-10-27 22:14:43.327059037 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1027 22:14:43.336584  271453 start_flags.go:326] no existing cluster config was found, will generate one from the flags 
	I1027 22:14:43.336903  271453 start_flags.go:409] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1027 22:14:43.337068  271453 start_flags.go:973] Wait components to verify : map[apiserver:true system_pods:true]
	I1027 22:14:43.340294  271453 out.go:171] Using Docker driver with root privileges
	I1027 22:14:43.343451  271453 cni.go:84] Creating CNI manager for ""
	I1027 22:14:43.343537  271453 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1027 22:14:43.343550  271453 start_flags.go:335] Found "CNI" CNI - setting NetworkPlugin=cni
	I1027 22:14:43.343646  271453 start.go:351] cluster config:
	{Name:download-only-041569 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-041569 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 22:14:43.346780  271453 out.go:99] Starting "download-only-041569" primary control-plane node in "download-only-041569" cluster
	I1027 22:14:43.346809  271453 cache.go:124] Beginning downloading kic base image for docker with containerd
	I1027 22:14:43.349754  271453 out.go:99] Pulling base image v0.0.48-1760939008-21773 ...
	I1027 22:14:43.349797  271453 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1027 22:14:43.349986  271453 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1027 22:14:43.365986  271453 cache.go:153] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 to local cache
	I1027 22:14:43.366186  271453 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local cache directory
	I1027 22:14:43.366286  271453 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 to local cache
	I1027 22:14:43.415848  271453 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4
	I1027 22:14:43.415876  271453 cache.go:59] Caching tarball of preloaded images
	I1027 22:14:43.416058  271453 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1027 22:14:43.419477  271453 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1027 22:14:43.419517  271453 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4 from gcs api...
	I1027 22:14:43.516751  271453 preload.go:290] Got checksum from GCS API "38d7f581f2fa4226c8af2c9106b982b7"
	I1027 22:14:43.516897  271453 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4?checksum=md5:38d7f581f2fa4226c8af2c9106b982b7 -> /home/jenkins/minikube-integration/21790-269600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4
	
	
	* The control-plane node download-only-041569 host does not exist
	  To start a cluster, run: "minikube start -p download-only-041569"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.09s)
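The preload download traced above fetches an MD5 digest from the GCS API and appends it as a ?checksum=md5:<hex> query parameter, which the downloader verifies before keeping the file. Below is a minimal Go sketch of that verify-while-downloading idea; downloadWithMD5 is a hypothetical helper, not minikube's actual implementation.

package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
)

// downloadWithMD5 streams url into dest while hashing every byte, then
// compares the digest with wantMD5 (lowercase hex) before keeping the file.
func downloadWithMD5(url, dest, wantMD5 string) error {
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	out, err := os.Create(dest)
	if err != nil {
		return err
	}
	defer out.Close()
	h := md5.New()
	// TeeReader feeds the hash as the body is copied to disk.
	if _, err := io.Copy(out, io.TeeReader(resp.Body, h)); err != nil {
		return err
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != wantMD5 {
		os.Remove(dest) // discard the corrupt download
		return fmt.Errorf("checksum mismatch: got %s, want %s", got, wantMD5)
	}
	return nil
}

func main() {
	// URL and checksum values mirror the preload lines in the log above.
	if err := downloadWithMD5(
		"https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4",
		"preloaded-images.tar.lz4",
		"38d7f581f2fa4226c8af2c9106b982b7",
	); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}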

TestDownloadOnly/v1.28.0/DeleteAll (0.22s)
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.22s)

TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.15s)
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-041569
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.15s)

TestDownloadOnly/v1.34.1/json-events (6.7s)
=== RUN   TestDownloadOnly/v1.34.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-669583 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-669583 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (6.699795625s)
--- PASS: TestDownloadOnly/v1.34.1/json-events (6.70s)

TestDownloadOnly/v1.34.1/preload-exists (0s)
=== RUN   TestDownloadOnly/v1.34.1/preload-exists
I1027 22:14:56.653805  271448 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime containerd
I1027 22:14:56.653841  271448 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21790-269600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.1/preload-exists (0.00s)

TestDownloadOnly/v1.34.1/LogsDuration (0.09s)
=== RUN   TestDownloadOnly/v1.34.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-669583
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-669583: exit status 85 (93.67711ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                         ARGS                                                                                          │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-041569 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-041569 │ jenkins │ v1.37.0 │ 27 Oct 25 22:14 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                 │ minikube             │ jenkins │ v1.37.0 │ 27 Oct 25 22:14 UTC │ 27 Oct 25 22:14 UTC │
	│ delete  │ -p download-only-041569                                                                                                                                                               │ download-only-041569 │ jenkins │ v1.37.0 │ 27 Oct 25 22:14 UTC │ 27 Oct 25 22:14 UTC │
	│ start   │ -o=json --download-only -p download-only-669583 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-669583 │ jenkins │ v1.37.0 │ 27 Oct 25 22:14 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/27 22:14:49
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1027 22:14:49.999215  271661 out.go:360] Setting OutFile to fd 1 ...
	I1027 22:14:49.999481  271661 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 22:14:49.999512  271661 out.go:374] Setting ErrFile to fd 2...
	I1027 22:14:50.001910  271661 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 22:14:50.002274  271661 root.go:340] Updating PATH: /home/jenkins/minikube-integration/21790-269600/.minikube/bin
	I1027 22:14:50.008783  271661 out.go:368] Setting JSON to true
	I1027 22:14:50.010159  271661 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":7040,"bootTime":1761596250,"procs":146,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1027 22:14:50.010259  271661 start.go:143] virtualization:  
	I1027 22:14:50.017674  271661 out.go:99] [download-only-669583] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1027 22:14:50.018121  271661 notify.go:221] Checking for updates...
	I1027 22:14:50.021357  271661 out.go:171] MINIKUBE_LOCATION=21790
	I1027 22:14:50.024881  271661 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1027 22:14:50.028044  271661 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21790-269600/kubeconfig
	I1027 22:14:50.033262  271661 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21790-269600/.minikube
	I1027 22:14:50.036391  271661 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1027 22:14:50.042617  271661 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1027 22:14:50.042940  271661 driver.go:422] Setting default libvirt URI to qemu:///system
	I1027 22:14:50.078318  271661 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1027 22:14:50.078447  271661 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 22:14:50.140200  271661 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:49 SystemTime:2025-10-27 22:14:50.130348093 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1027 22:14:50.140316  271661 docker.go:318] overlay module found
	I1027 22:14:50.143416  271661 out.go:99] Using the docker driver based on user configuration
	I1027 22:14:50.143462  271661 start.go:307] selected driver: docker
	I1027 22:14:50.143470  271661 start.go:928] validating driver "docker" against <nil>
	I1027 22:14:50.143598  271661 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 22:14:50.199888  271661 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:49 SystemTime:2025-10-27 22:14:50.190417601 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1027 22:14:50.200043  271661 start_flags.go:326] no existing cluster config was found, will generate one from the flags 
	I1027 22:14:50.200339  271661 start_flags.go:409] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1027 22:14:50.200497  271661 start_flags.go:973] Wait components to verify : map[apiserver:true system_pods:true]
	I1027 22:14:50.203761  271661 out.go:171] Using Docker driver with root privileges
	I1027 22:14:50.206598  271661 cni.go:84] Creating CNI manager for ""
	I1027 22:14:50.206672  271661 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1027 22:14:50.206687  271661 start_flags.go:335] Found "CNI" CNI - setting NetworkPlugin=cni
	I1027 22:14:50.206783  271661 start.go:351] cluster config:
	{Name:download-only-669583 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:download-only-669583 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 22:14:50.209816  271661 out.go:99] Starting "download-only-669583" primary control-plane node in "download-only-669583" cluster
	I1027 22:14:50.209841  271661 cache.go:124] Beginning downloading kic base image for docker with containerd
	I1027 22:14:50.212862  271661 out.go:99] Pulling base image v0.0.48-1760939008-21773 ...
	I1027 22:14:50.212913  271661 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1027 22:14:50.213107  271661 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1027 22:14:50.229345  271661 cache.go:153] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 to local cache
	I1027 22:14:50.229486  271661 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local cache directory
	I1027 22:14:50.229510  271661 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local cache directory, skipping pull
	I1027 22:14:50.229516  271661 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in cache, skipping pull
	I1027 22:14:50.229524  271661 cache.go:156] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 as a tarball
	I1027 22:14:50.273594  271661 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4
	I1027 22:14:50.273628  271661 cache.go:59] Caching tarball of preloaded images
	I1027 22:14:50.273795  271661 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1027 22:14:50.276904  271661 out.go:99] Downloading Kubernetes v1.34.1 preload ...
	I1027 22:14:50.276927  271661 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4 from gcs api...
	I1027 22:14:50.377887  271661 preload.go:290] Got checksum from GCS API "435977642a202d20ca04f26d87d875a8"
	I1027 22:14:50.377943  271661 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4?checksum=md5:435977642a202d20ca04f26d87d875a8 -> /home/jenkins/minikube-integration/21790-269600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4
	
	
	* The control-plane node download-only-669583 host does not exist
	  To start a cluster, run: "minikube start -p download-only-669583"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.1/LogsDuration (0.09s)
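Unlike the v1.28.0 run, the image.go lines above hit the cache: the kicbase tarball already exists in the local cache directory, so the pull is skipped. A minimal sketch of that check-before-pull guard follows; cachedTarball and its on-disk layout are hypothetical, not minikube's real cache structure.

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// cachedTarball maps an image ref to an on-disk path and reports whether a
// cached copy already exists, so the caller can skip the pull on a hit.
func cachedTarball(cacheDir, ref string) (string, bool) {
	// Flatten the ref into a single file name (hypothetical layout).
	name := strings.NewReplacer("/", "_", ":", "_", "@", "_").Replace(ref)
	p := filepath.Join(cacheDir, name+".tar")
	_, err := os.Stat(p)
	return p, err == nil
}

func main() {
	ref := "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773"
	if p, ok := cachedTarball(os.ExpandEnv("$HOME/.minikube/cache/kic"), ref); ok {
		fmt.Println("exists in cache, skipping pull:", p)
	} else {
		fmt.Println("cache miss, would download to:", p)
	}
}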

TestDownloadOnly/v1.34.1/DeleteAll (0.21s)
=== RUN   TestDownloadOnly/v1.34.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.34.1/DeleteAll (0.21s)

TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.15s)
=== RUN   TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-669583
--- PASS: TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.15s)

TestBinaryMirror (0.6s)
=== RUN   TestBinaryMirror
I1027 22:14:57.815269  271448 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-833141 --alsologtostderr --binary-mirror http://127.0.0.1:40753 --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-833141" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-833141
--- PASS: TestBinaryMirror (0.60s)
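TestBinaryMirror points minikube at a local HTTP server via --binary-mirror http://127.0.0.1:40753 so kubectl and friends are fetched from it instead of dl.k8s.io. A minimal sketch of such a mirror, assuming a local directory tree that mimics dl.k8s.io's /release/<version>/bin/<os>/<arch>/ paths (the test spins up its own server; this is only an illustration):

package main

import (
	"log"
	"net/http"
)

func main() {
	// Serve ./mirror-root so that a request for
	// /release/v1.34.1/bin/linux/arm64/kubectl resolves to a local copy
	// instead of dl.k8s.io (hypothetical layout).
	http.Handle("/", http.FileServer(http.Dir("mirror-root")))
	log.Println("binary mirror listening on 127.0.0.1:40753")
	log.Fatal(http.ListenAndServe("127.0.0.1:40753", nil))
}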

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-437249
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-437249: exit status 85 (74.6207ms)

-- stdout --
	* Profile "addons-437249" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-437249"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-437249
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-437249: exit status 85 (76.34052ms)

-- stdout --
	* Profile "addons-437249" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-437249"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

TestAddons/Setup (170.31s)
=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-arm64 start -p addons-437249 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-arm64 start -p addons-437249 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m50.307772827s)
--- PASS: TestAddons/Setup (170.31s)

TestAddons/serial/Volcano (39.68s)
=== RUN   TestAddons/serial/Volcano
addons_test.go:884: volcano-controller stabilized in 65.452583ms
addons_test.go:868: volcano-scheduler stabilized in 66.120486ms
addons_test.go:876: volcano-admission stabilized in 66.319274ms
addons_test.go:890: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-scheduler-76c996c8bf-g4bkn" [88d557be-81af-4f8d-ad3e-02313f0f045f] Running
addons_test.go:890: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.004873741s
addons_test.go:894: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-admission-6c447bd768-mt6v4" [bf943659-ea5d-4b9d-84fb-d5c6c215a0a3] Running
addons_test.go:894: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.004273593s
addons_test.go:898: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-controllers-6fd4f85cb8-ksq7p" [2db34c2e-b5a2-4733-aac2-ac36892c3e56] Running
addons_test.go:898: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.002991879s
addons_test.go:903: (dbg) Run:  kubectl --context addons-437249 delete -n volcano-system job volcano-admission-init
addons_test.go:909: (dbg) Run:  kubectl --context addons-437249 create -f testdata/vcjob.yaml
addons_test.go:917: (dbg) Run:  kubectl --context addons-437249 get vcjob -n my-volcano
addons_test.go:935: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:352: "test-job-nginx-0" [291c8e31-ddb9-4ef4-ab9d-d98961436a33] Pending
helpers_test.go:352: "test-job-nginx-0" [291c8e31-ddb9-4ef4-ab9d-d98961436a33] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "test-job-nginx-0" [291c8e31-ddb9-4ef4-ab9d-d98961436a33] Running
addons_test.go:935: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 11.005195564s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-437249 addons disable volcano --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-437249 addons disable volcano --alsologtostderr -v=1: (12.031520207s)
--- PASS: TestAddons/serial/Volcano (39.68s)
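Each "waiting 6m0s for pods matching ..." line above is a poll loop over pods selected by label. A minimal sketch of that pattern, shelling out to kubectl rather than using client-go; waitForRunning is a hypothetical name, and the real helper also inspects readiness conditions, not just the phase:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForRunning polls until every pod matching the label selector reports
// phase Running, or the deadline passes.
func waitForRunning(kubecontext, ns, selector string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", kubecontext,
			"get", "pods", "-n", ns, "-l", selector,
			"-o", "jsonpath={.items[*].status.phase}").Output()
		if err == nil {
			phases := strings.Fields(string(out))
			allRunning := len(phases) > 0
			for _, p := range phases {
				if p != "Running" {
					allRunning = false
				}
			}
			if allRunning {
				return nil
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("timed out after %s waiting for %q in %q", timeout, selector, ns)
}

func main() {
	if err := waitForRunning("addons-437249", "volcano-system", "app=volcano-scheduler", 6*time.Minute); err != nil {
		fmt.Println(err)
	}
}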

TestAddons/serial/GCPAuth/Namespaces (0.18s)
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-437249 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-437249 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.18s)

TestAddons/serial/GCPAuth/FakeCredentials (10.84s)
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-437249 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-437249 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [9dcf9ae8-ee86-4ca5-83ba-b4159c371701] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [9dcf9ae8-ee86-4ca5-83ba-b4159c371701] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 10.003971959s
addons_test.go:694: (dbg) Run:  kubectl --context addons-437249 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-437249 describe sa gcp-auth-test
addons_test.go:720: (dbg) Run:  kubectl --context addons-437249 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:744: (dbg) Run:  kubectl --context addons-437249 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (10.84s)
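The printenv step above verifies that the gcp-auth webhook injected GOOGLE_APPLICATION_CREDENTIALS into the busybox pod's environment. A minimal sketch of that check, reusing the pod and context names from the log; the helper itself is hypothetical:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Exec into the test pod and read the env var the webhook should have set.
	out, err := exec.Command("kubectl", "--context", "addons-437249",
		"exec", "busybox", "--",
		"/bin/sh", "-c", "printenv GOOGLE_APPLICATION_CREDENTIALS").Output()
	if err != nil {
		fmt.Println("env var not injected:", err)
		return
	}
	// Expected to point at the mounted fake credentials file,
	// e.g. /google-app-creds.json as read back later in the test.
	fmt.Println("GOOGLE_APPLICATION_CREDENTIALS =", strings.TrimSpace(string(out)))
}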

TestAddons/parallel/Registry (15.43s)
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 11.114852ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-6b586f9694-mrc9x" [956cca00-4657-4809-841e-14ed156d67ae] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.008220192s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-5pf8j" [c43e65cd-88cd-4116-bfcc-1b6fa86ab861] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.012846118s
addons_test.go:392: (dbg) Run:  kubectl --context addons-437249 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-437249 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-437249 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.40119736s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-arm64 -p addons-437249 ip
2025/10/27 22:19:03 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-437249 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (15.43s)
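The wget --spider step above probes the registry through its cluster DNS name, which only resolves inside the cluster; that is why the test runs it from a throwaway busybox pod rather than the host. A rough in-cluster equivalent in Go (an illustration, not the test's code):

package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 5 * time.Second}
	// HEAD mirrors wget --spider: fetch headers only, no body.
	resp, err := client.Head("http://registry.kube-system.svc.cluster.local")
	if err != nil {
		fmt.Println("registry unreachable:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("registry responded:", resp.Status)
}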

TestAddons/parallel/RegistryCreds (0.88s)
=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 4.033104ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-arm64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-437249
addons_test.go:332: (dbg) Run:  kubectl --context addons-437249 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-437249 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.88s)

TestAddons/parallel/Ingress (20.38s)
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-437249 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-437249 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-437249 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [7bcee608-1bfb-4941-8262-bf6d4a7b9a13] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [7bcee608-1bfb-4941-8262-bf6d4a7b9a13] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.004032512s
I1027 22:19:52.773833  271448 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-arm64 -p addons-437249 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-437249 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-arm64 -p addons-437249 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-437249 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-437249 addons disable ingress-dns --alsologtostderr -v=1: (1.632138471s)
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-437249 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-437249 addons disable ingress --alsologtostderr -v=1: (8.081182851s)
--- PASS: TestAddons/parallel/Ingress (20.38s)
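The curl step above exercises name-based routing: the request targets the node address while the Host header carries nginx.example.com, and the ingress controller routes on that header. In Go the virtual-host name goes in Request.Host, not in the URL; a minimal sketch, using the 192.168.49.2 address reported by "minikube ip" in this test:

package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	req, err := http.NewRequest("GET", "http://192.168.49.2/", nil)
	if err != nil {
		panic(err)
	}
	// The ingress controller matches on this header, so the request reaches
	// the nginx Service even though the URL targets the bare node IP.
	req.Host = "nginx.example.com"

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status, len(body), "bytes")
}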

TestAddons/parallel/InspektorGadget (5.33s)
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-cf4vx" [81321eb6-a2ec-499b-a16a-b39f5da10b34] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.004031842s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-437249 addons disable inspektor-gadget --alsologtostderr -v=1
--- PASS: TestAddons/parallel/InspektorGadget (5.33s)

TestAddons/parallel/MetricsServer (6.85s)
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 4.023914ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-4lzgz" [0cd1feb8-954a-48f1-b4c6-9bb89f266a1c] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.008030664s
addons_test.go:463: (dbg) Run:  kubectl --context addons-437249 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-437249 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.85s)

TestAddons/parallel/CSI (55.95s)
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I1027 22:19:13.284101  271448 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1027 22:19:13.291766  271448 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1027 22:19:13.291795  271448 kapi.go:107] duration metric: took 10.55584ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 10.567138ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-437249 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-437249 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-437249 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-437249 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-437249 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-437249 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-437249 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-437249 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-437249 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-437249 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-437249 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-437249 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-437249 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-437249 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-437249 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-437249 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-437249 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-437249 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [b5e59920-43cf-4d27-af20-c9f2a455d907] Pending
helpers_test.go:352: "task-pv-pod" [b5e59920-43cf-4d27-af20-c9f2a455d907] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [b5e59920-43cf-4d27-af20-c9f2a455d907] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 8.003686132s
addons_test.go:572: (dbg) Run:  kubectl --context addons-437249 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-437249 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-437249 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-437249 delete pod task-pv-pod
addons_test.go:588: (dbg) Run:  kubectl --context addons-437249 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-437249 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-437249 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-437249 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-437249 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-437249 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-437249 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-437249 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-437249 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-437249 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-437249 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-437249 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-437249 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-437249 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-437249 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-437249 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [8825561d-0e29-4699-a951-efdfa0b020ef] Pending
helpers_test.go:352: "task-pv-pod-restore" [8825561d-0e29-4699-a951-efdfa0b020ef] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [8825561d-0e29-4699-a951-efdfa0b020ef] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.003944337s
addons_test.go:614: (dbg) Run:  kubectl --context addons-437249 delete pod task-pv-pod-restore
addons_test.go:614: (dbg) Done: kubectl --context addons-437249 delete pod task-pv-pod-restore: (1.26833083s)
addons_test.go:618: (dbg) Run:  kubectl --context addons-437249 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-437249 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-437249 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-437249 addons disable volumesnapshots --alsologtostderr -v=1: (1.28436603s)
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-437249 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-437249 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.86351202s)
--- PASS: TestAddons/parallel/CSI (55.95s)
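
The CSI subtest above walks one full provision → snapshot → restore cycle. A minimal hand-driven sketch of the same cycle follows; the manifest file names and comments are assumptions standing in for the testdata files, which are not reproduced in this report:

	# enable the snapshot controller and the hostpath CSI driver
	minikube addons enable volumesnapshots
	minikube addons enable csi-hostpath-driver
	kubectl apply -f pvc.yaml            # PVC "hpvc" served by the hostpath CSI driver
	kubectl apply -f pv-pod.yaml         # pod "task-pv-pod" mounting the PVC
	kubectl apply -f snapshot.yaml       # VolumeSnapshot "new-snapshot-demo" of hpvc
	kubectl get volumesnapshot new-snapshot-demo -o jsonpath='{.status.readyToUse}'
	kubectl delete pod task-pv-pod && kubectl delete pvc hpvc
	kubectl apply -f pvc-restore.yaml    # PVC "hpvc-restore" with dataSource: new-snapshot-demo
	kubectl apply -f pv-pod-restore.yaml # pod verifying the restored data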

TestAddons/parallel/Headlamp (18.14s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-437249 --alsologtostderr -v=1
addons_test.go:808: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-437249 --alsologtostderr -v=1: (1.281715651s)
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:352: "headlamp-6945c6f4d-kmc9f" [f42ebcc4-824f-49a9-aab7-6e01de85c3f2] Pending
helpers_test.go:352: "headlamp-6945c6f4d-kmc9f" [f42ebcc4-824f-49a9-aab7-6e01de85c3f2] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-6945c6f4d-kmc9f" [f42ebcc4-824f-49a9-aab7-6e01de85c3f2] Running
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.003581295s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-437249 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-437249 addons disable headlamp --alsologtostderr -v=1: (5.854061402s)
--- PASS: TestAddons/parallel/Headlamp (18.14s)

TestAddons/parallel/CloudSpanner (5.94s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-86bd5cbb97-qjl92" [63379fd0-6eb0-4fd0-90cb-1c09734cce65] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003658247s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-437249 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.94s)

TestAddons/parallel/LocalPath (8.72s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-437249 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-437249 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-437249 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-437249 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-437249 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-437249 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-437249 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [0eb412d7-a1da-492e-bee1-c6ce05636d19] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [0eb412d7-a1da-492e-bee1-c6ce05636d19] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [0eb412d7-a1da-492e-bee1-c6ce05636d19] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.004139054s
addons_test.go:967: (dbg) Run:  kubectl --context addons-437249 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-arm64 -p addons-437249 ssh "cat /opt/local-path-provisioner/pvc-d1caa4fb-f88e-4b38-a37b-0e35cccee915_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-437249 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-437249 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-437249 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (8.72s)
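
The LocalPath subtest can be replayed by hand roughly as below; the manifest names are stand-ins for the testdata files, and the on-disk layout /opt/local-path-provisioner/<pv-name>_<namespace>_<pvc-name> is inferred from the ssh check above:

	minikube addons enable storage-provisioner-rancher
	kubectl apply -f pvc.yaml   # PVC "test-pvc" with storageClassName: local-path
	kubectl apply -f pod.yaml   # pod that writes file1 into the volume and exits
	minikube ssh "cat /opt/local-path-provisioner/<pv-name>_default_test-pvc/file1"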

TestAddons/parallel/NvidiaDevicePlugin (6.62s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-qcxvf" [6c769dbe-976e-4791-abf4-67e856dfadd7] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.004146796s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-437249 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.62s)

TestAddons/parallel/Yakd (12s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-mztww" [df83826f-307f-454d-8f5a-8dc9ef791b74] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003464449s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-437249 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-437249 addons disable yakd --alsologtostderr -v=1: (5.995750753s)
--- PASS: TestAddons/parallel/Yakd (12.00s)

TestAddons/StoppedEnableDisable (12.38s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-437249
addons_test.go:172: (dbg) Done: out/minikube-linux-arm64 stop -p addons-437249: (12.097423814s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-437249
addons_test.go:180: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-437249
addons_test.go:185: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-437249
--- PASS: TestAddons/StoppedEnableDisable (12.38s)

TestCertOptions (39.43s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-553301 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-553301 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (36.558506763s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-553301 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-553301 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-553301 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-553301" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-553301
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-553301: (2.11591377s)
--- PASS: TestCertOptions (39.43s)
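
The assertions behind TestCertOptions can be checked by hand; the profile name below is hypothetical. The extra IPs and names passed at start time should appear as Subject Alternative Names on the apiserver certificate, and the non-default port should land in the kubeconfig:

	minikube start -p certopts --apiserver-ips=192.168.15.15 --apiserver-names=www.google.com --apiserver-port=8555
	minikube -p certopts ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" | grep -A1 'Subject Alternative Name'
	kubectl --context certopts config view   # cluster server URL should use port 8555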

TestCertExpiration (233.36s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-956631 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-956631 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd: (41.362580977s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-956631 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-956631 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd: (9.299263871s)
helpers_test.go:175: Cleaning up "cert-expiration-956631" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-956631
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-956631: (2.693785754s)
--- PASS: TestCertExpiration (233.36s)
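
The long runtime here is mostly deliberate waiting: the first start issues certificates valid for only 3 minutes, the test waits out that window, and the second start re-issues them with the new expiration. A sketch with a hypothetical profile name:

	minikube start -p certexp --memory=3072 --cert-expiration=3m
	# ...wait roughly three minutes for the certificates to expire...
	minikube start -p certexp --memory=3072 --cert-expiration=8760h   # restart re-issues certs for one year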

TestForceSystemdFlag (47.94s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-246216 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-246216 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (44.99488659s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-246216 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-246216" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-246216
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-246216: (2.491865271s)
--- PASS: TestForceSystemdFlag (47.94s)
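
The ssh step above verifies that --force-systemd switches containerd to the systemd cgroup driver. A sketch of the same check with a hypothetical profile name; the expected TOML key is an assumption based on containerd's standard runc options:

	minikube start -p sysd --memory=3072 --force-systemd --container-runtime=containerd
	minikube -p sysd ssh "grep SystemdCgroup /etc/containerd/config.toml"   # expect: SystemdCgroup = true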

TestForceSystemdEnv (47.16s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-092094 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-092094 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (44.05603575s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-092094 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-092094" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-092094
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-092094: (2.708278608s)
--- PASS: TestForceSystemdEnv (47.16s)

TestDockerEnvContainerd (50.84s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd true linux arm64
docker_test.go:181: (dbg) Run:  out/minikube-linux-arm64 start -p dockerenv-068814 --driver=docker  --container-runtime=containerd
docker_test.go:181: (dbg) Done: out/minikube-linux-arm64 start -p dockerenv-068814 --driver=docker  --container-runtime=containerd: (34.174340654s)
docker_test.go:189: (dbg) Run:  /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-068814"
docker_test.go:189: (dbg) Done: /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-068814": (1.122683278s)
docker_test.go:220: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-5DC64HL4Fhsd/agent.291467" SSH_AGENT_PID="291468" DOCKER_HOST=ssh://docker@127.0.0.1:33138 docker version"
docker_test.go:243: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-5DC64HL4Fhsd/agent.291467" SSH_AGENT_PID="291468" DOCKER_HOST=ssh://docker@127.0.0.1:33138 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env"
docker_test.go:243: (dbg) Done: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-5DC64HL4Fhsd/agent.291467" SSH_AGENT_PID="291468" DOCKER_HOST=ssh://docker@127.0.0.1:33138 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env": (1.321600386s)
docker_test.go:250: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-5DC64HL4Fhsd/agent.291467" SSH_AGENT_PID="291468" DOCKER_HOST=ssh://docker@127.0.0.1:33138 docker image ls"
docker_test.go:250: (dbg) Done: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-5DC64HL4Fhsd/agent.291467" SSH_AGENT_PID="291468" DOCKER_HOST=ssh://docker@127.0.0.1:33138 docker image ls": (1.218746592s)
helpers_test.go:175: Cleaning up "dockerenv-068814" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p dockerenv-068814
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p dockerenv-068814: (2.470214854s)
--- PASS: TestDockerEnvContainerd (50.84s)
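
The docker-env flow above is usable interactively: eval'ing its output exports DOCKER_HOST (an ssh:// URL into the node) plus the SSH agent variables, after which the host docker CLI talks to the Docker daemon inside the minikube node. A sketch; the image tag is hypothetical:

	eval "$(minikube -p dockerenv-068814 docker-env --ssh-host --ssh-add)"
	docker version                                  # server side now reports the daemon in the node
	docker build -t local/dockerenv-test:latest .   # image lands in the node's image store
	docker image ls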

TestErrorSpam/setup (32.22s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-123373 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-123373 --driver=docker  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-123373 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-123373 --driver=docker  --container-runtime=containerd: (32.223733632s)
--- PASS: TestErrorSpam/setup (32.22s)

TestErrorSpam/start (0.81s)

=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-123373 --log_dir /tmp/nospam-123373 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-123373 --log_dir /tmp/nospam-123373 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-123373 --log_dir /tmp/nospam-123373 start --dry-run
--- PASS: TestErrorSpam/start (0.81s)

TestErrorSpam/status (1.09s)

=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-123373 --log_dir /tmp/nospam-123373 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-123373 --log_dir /tmp/nospam-123373 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-123373 --log_dir /tmp/nospam-123373 status
--- PASS: TestErrorSpam/status (1.09s)

TestErrorSpam/pause (1.76s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-123373 --log_dir /tmp/nospam-123373 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-123373 --log_dir /tmp/nospam-123373 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-123373 --log_dir /tmp/nospam-123373 pause
--- PASS: TestErrorSpam/pause (1.76s)

TestErrorSpam/unpause (1.9s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-123373 --log_dir /tmp/nospam-123373 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-123373 --log_dir /tmp/nospam-123373 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-123373 --log_dir /tmp/nospam-123373 unpause
--- PASS: TestErrorSpam/unpause (1.90s)

TestErrorSpam/stop (1.61s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-123373 --log_dir /tmp/nospam-123373 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-arm64 -p nospam-123373 --log_dir /tmp/nospam-123373 stop: (1.416740555s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-123373 --log_dir /tmp/nospam-123373 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-123373 --log_dir /tmp/nospam-123373 stop
--- PASS: TestErrorSpam/stop (1.61s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21790-269600/.minikube/files/etc/test/nested/copy/271448/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (83.72s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-arm64 start -p functional-735759 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
E1027 22:22:48.840709  271448 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-269600/.minikube/profiles/addons-437249/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 22:22:48.847111  271448 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-269600/.minikube/profiles/addons-437249/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 22:22:48.858462  271448 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-269600/.minikube/profiles/addons-437249/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 22:22:48.879904  271448 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-269600/.minikube/profiles/addons-437249/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 22:22:48.921283  271448 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-269600/.minikube/profiles/addons-437249/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 22:22:49.003540  271448 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-269600/.minikube/profiles/addons-437249/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 22:22:49.165301  271448 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-269600/.minikube/profiles/addons-437249/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 22:22:49.487054  271448 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-269600/.minikube/profiles/addons-437249/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 22:22:50.128868  271448 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-269600/.minikube/profiles/addons-437249/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 22:22:51.410234  271448 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-269600/.minikube/profiles/addons-437249/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 22:22:53.971655  271448 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-269600/.minikube/profiles/addons-437249/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 22:22:59.093366  271448 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-269600/.minikube/profiles/addons-437249/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 22:23:09.335866  271448 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-269600/.minikube/profiles/addons-437249/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-arm64 start -p functional-735759 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (1m23.720299147s)
--- PASS: TestFunctional/serial/StartWithProxy (83.72s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (7.32s)

=== RUN   TestFunctional/serial/SoftStart
I1027 22:23:25.995205  271448 config.go:182] Loaded profile config "functional-735759": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-arm64 start -p functional-735759 --alsologtostderr -v=8
E1027 22:23:29.817530  271448 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-269600/.minikube/profiles/addons-437249/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Done: out/minikube-linux-arm64 start -p functional-735759 --alsologtostderr -v=8: (7.321695419s)
functional_test.go:678: soft start took 7.324282422s for "functional-735759" cluster.
I1027 22:23:33.317251  271448 config.go:182] Loaded profile config "functional-735759": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/SoftStart (7.32s)

TestFunctional/serial/KubeContext (0.06s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

TestFunctional/serial/KubectlGetPods (0.09s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-735759 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.75s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-735759 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-735759 cache add registry.k8s.io/pause:3.1: (1.404637957s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-735759 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-735759 cache add registry.k8s.io/pause:3.3: (1.178572191s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-735759 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-735759 cache add registry.k8s.io/pause:latest: (1.16331541s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.75s)

TestFunctional/serial/CacheCmd/cache/add_local (1.18s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-735759 /tmp/TestFunctionalserialCacheCmdcacheadd_local2164142025/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-arm64 -p functional-735759 cache add minikube-local-cache-test:functional-735759
functional_test.go:1109: (dbg) Run:  out/minikube-linux-arm64 -p functional-735759 cache delete minikube-local-cache-test:functional-735759
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-735759
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.18s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

TestFunctional/serial/CacheCmd/cache/list (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-arm64 -p functional-735759 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.98s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-arm64 -p functional-735759 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 -p functional-735759 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-735759 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (306.28303ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-arm64 -p functional-735759 cache reload
functional_test.go:1173: (dbg) Done: out/minikube-linux-arm64 -p functional-735759 cache reload: (1.006882065s)
functional_test.go:1178: (dbg) Run:  out/minikube-linux-arm64 -p functional-735759 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.98s)
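
Taken together, the cache subtests exercise this round trip (all commands appear in the logs above; the profile flag is omitted here for brevity):

	minikube cache add registry.k8s.io/pause:latest                  # pull once, load into the node
	minikube ssh sudo crictl rmi registry.k8s.io/pause:latest        # remove it from the node
	minikube ssh sudo crictl inspecti registry.k8s.io/pause:latest   # exits 1: image gone
	minikube cache reload                                            # re-loads everything in the local cache
	minikube ssh sudo crictl inspecti registry.k8s.io/pause:latest   # present again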

TestFunctional/serial/CacheCmd/cache/delete (0.14s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.14s)

TestFunctional/serial/MinikubeKubectlCmd (0.14s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-arm64 -p functional-735759 kubectl -- --context functional-735759 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.14s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-735759 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

TestFunctional/serial/ExtraConfig (42.41s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-arm64 start -p functional-735759 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1027 22:24:10.778896  271448 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-269600/.minikube/profiles/addons-437249/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-arm64 start -p functional-735759 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (42.413969365s)
functional_test.go:776: restart took 42.414066473s for "functional-735759" cluster.
I1027 22:24:23.639437  271448 config.go:182] Loaded profile config "functional-735759": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/ExtraConfig (42.41s)
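
--extra-config takes component.key=value pairs that are passed through as flags to the named component (apiserver, kubelet, scheduler, and so on). The restart exercised above is, in isolation:

	out/minikube-linux-arm64 start -p functional-735759 \
	  --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision \
	  --wait=all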

TestFunctional/serial/ComponentHealth (0.1s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-735759 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)

TestFunctional/serial/LogsCmd (1.49s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-arm64 -p functional-735759 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-arm64 -p functional-735759 logs: (1.485465171s)
--- PASS: TestFunctional/serial/LogsCmd (1.49s)

TestFunctional/serial/LogsFileCmd (1.49s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-arm64 -p functional-735759 logs --file /tmp/TestFunctionalserialLogsFileCmd804518780/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-arm64 -p functional-735759 logs --file /tmp/TestFunctionalserialLogsFileCmd804518780/001/logs.txt: (1.487082802s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.49s)

TestFunctional/serial/InvalidService (4.29s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-735759 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-735759
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-735759: exit status 115 (465.097149ms)
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:32067 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-735759 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.29s)

TestFunctional/parallel/ConfigCmd (0.49s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-735759 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-735759 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-735759 config get cpus: exit status 14 (99.149754ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-735759 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-735759 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-735759 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-735759 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-735759 config get cpus: exit status 14 (70.944793ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.49s)
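
The config subcommands behave like a small key/value store, and `config get` on an unset key exits with status 14, which is exactly what the two non-zero exits above assert:

	minikube -p functional-735759 config get cpus     # exit 14: key not set
	minikube -p functional-735759 config set cpus 2
	minikube -p functional-735759 config get cpus     # prints 2, exit 0
	minikube -p functional-735759 config unset cpus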

TestFunctional/parallel/DashboardCmd (9.16s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-735759 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-735759 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 307048: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (9.16s)

TestFunctional/parallel/DryRun (0.48s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-arm64 start -p functional-735759 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-735759 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (214.085466ms)
-- stdout --
	* [functional-735759] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21790
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21790-269600/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21790-269600/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I1027 22:25:03.063322  306677 out.go:360] Setting OutFile to fd 1 ...
	I1027 22:25:03.063699  306677 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 22:25:03.063747  306677 out.go:374] Setting ErrFile to fd 2...
	I1027 22:25:03.063767  306677 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 22:25:03.064076  306677 root.go:340] Updating PATH: /home/jenkins/minikube-integration/21790-269600/.minikube/bin
	I1027 22:25:03.064496  306677 out.go:368] Setting JSON to false
	I1027 22:25:03.065526  306677 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":7653,"bootTime":1761596250,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1027 22:25:03.065640  306677 start.go:143] virtualization:  
	I1027 22:25:03.068922  306677 out.go:179] * [functional-735759] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1027 22:25:03.071761  306677 out.go:179]   - MINIKUBE_LOCATION=21790
	I1027 22:25:03.071831  306677 notify.go:221] Checking for updates...
	I1027 22:25:03.077677  306677 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1027 22:25:03.080901  306677 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21790-269600/kubeconfig
	I1027 22:25:03.083798  306677 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21790-269600/.minikube
	I1027 22:25:03.086698  306677 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1027 22:25:03.091353  306677 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1027 22:25:03.094747  306677 config.go:182] Loaded profile config "functional-735759": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1027 22:25:03.095364  306677 driver.go:422] Setting default libvirt URI to qemu:///system
	I1027 22:25:03.129655  306677 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1027 22:25:03.129813  306677 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 22:25:03.198243  306677 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-27 22:25:03.188991675 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1027 22:25:03.198359  306677 docker.go:318] overlay module found
	I1027 22:25:03.201824  306677 out.go:179] * Using the docker driver based on existing profile
	I1027 22:25:03.204563  306677 start.go:307] selected driver: docker
	I1027 22:25:03.204577  306677 start.go:928] validating driver "docker" against &{Name:functional-735759 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-735759 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 22:25:03.204716  306677 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1027 22:25:03.208358  306677 out.go:203] 
	W1027 22:25:03.211268  306677 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1027 22:25:03.214029  306677 out.go:203] 
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-arm64 start -p functional-735759 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.48s)

TestFunctional/parallel/InternationalLanguage (0.22s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-arm64 start -p functional-735759 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-735759 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (218.66273ms)
-- stdout --
	* [functional-735759] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21790
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21790-269600/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21790-269600/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
-- /stdout --
** stderr ** 
	I1027 22:25:02.840451  306631 out.go:360] Setting OutFile to fd 1 ...
	I1027 22:25:02.840638  306631 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 22:25:02.840661  306631 out.go:374] Setting ErrFile to fd 2...
	I1027 22:25:02.840686  306631 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 22:25:02.841639  306631 root.go:340] Updating PATH: /home/jenkins/minikube-integration/21790-269600/.minikube/bin
	I1027 22:25:02.842088  306631 out.go:368] Setting JSON to false
	I1027 22:25:02.843129  306631 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":7653,"bootTime":1761596250,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1027 22:25:02.843234  306631 start.go:143] virtualization:  
	I1027 22:25:02.846653  306631 out.go:179] * [functional-735759] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	I1027 22:25:02.849680  306631 out.go:179]   - MINIKUBE_LOCATION=21790
	I1027 22:25:02.849855  306631 notify.go:221] Checking for updates...
	I1027 22:25:02.855820  306631 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1027 22:25:02.858756  306631 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21790-269600/kubeconfig
	I1027 22:25:02.861827  306631 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21790-269600/.minikube
	I1027 22:25:02.864814  306631 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1027 22:25:02.867672  306631 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1027 22:25:02.871152  306631 config.go:182] Loaded profile config "functional-735759": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1027 22:25:02.871756  306631 driver.go:422] Setting default libvirt URI to qemu:///system
	I1027 22:25:02.908517  306631 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1027 22:25:02.908630  306631 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 22:25:02.979469  306631 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-27 22:25:02.97008347 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1027 22:25:02.979586  306631 docker.go:318] overlay module found
	I1027 22:25:02.982760  306631 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1027 22:25:02.985702  306631 start.go:307] selected driver: docker
	I1027 22:25:02.985736  306631 start.go:928] validating driver "docker" against &{Name:functional-735759 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-735759 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 22:25:02.985862  306631 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1027 22:25:02.989548  306631 out.go:203] 
	W1027 22:25:02.992612  306631 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1027 22:25:02.995604  306631 out.go:203] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.22s)
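
For non-French readers: "Utilisation du pilote docker basé sur le profil existant" is "Using the docker driver based on the existing profile", and the X line is the same RSRC_INSUFFICIENT_REQ_MEMORY error seen in the English DryRun output above ("Requested memory allocation 250MiB is less than the usable minimum of 1800MB"). A minimal sketch for reproducing the localized output by hand, assuming minikube picks the locale up from the standard LC_ALL/LANG environment variables, which is what this test relies on:

    # Force a French locale for a single dry-run invocation; the profile and
    # flags are the ones used by this test run.
    LC_ALL=fr_FR.UTF-8 out/minikube-linux-arm64 start -p functional-735759 \
      --dry-run --memory 250MB --driver=docker --container-runtime=containerd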

TestFunctional/parallel/StatusCmd (1.15s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-arm64 -p functional-735759 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-arm64 -p functional-735759 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-arm64 -p functional-735759 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.15s)
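
Condensed, the three status invocations this test exercises are the default human-readable view, a Go-template projection over the status struct fields (Host, Kubelet, APIServer, Kubeconfig), and JSON output:

    out/minikube-linux-arm64 -p functional-735759 status                 # default view
    out/minikube-linux-arm64 -p functional-735759 status -f '{{.Host}}'  # template over one field
    out/minikube-linux-arm64 -p functional-735759 status -o json         # machine-readable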

TestFunctional/parallel/ServiceCmdConnect (7.7s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-735759 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-735759 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-stwnm" [d8cd5d44-f098-4eed-b97b-8ca878291640] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-connect-7d85dfc575-stwnm" [d8cd5d44-f098-4eed-b97b-8ca878291640] Running
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.003314406s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-arm64 -p functional-735759 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.49.2:30279
functional_test.go:1680: http://192.168.49.2:30279: success! body:
Request served by hello-node-connect-7d85dfc575-stwnm

HTTP/1.1 GET /

Host: 192.168.49.2:30279
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (7.70s)
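
The steps above condense to the following flow, using only commands shown in this run (the NodePort, and hence the URL, is assigned by Kubernetes and differs between runs):

    # Deploy the echo server, expose it on a NodePort, resolve its URL, probe it.
    kubectl --context functional-735759 create deployment hello-node-connect --image kicbase/echo-server
    kubectl --context functional-735759 expose deployment hello-node-connect --type=NodePort --port=8080
    URL=$(out/minikube-linux-arm64 -p functional-735759 service hello-node-connect --url)
    curl -s "$URL"    # echo-server reflects the request back, as in the body above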

TestFunctional/parallel/AddonsCmd (0.15s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-arm64 -p functional-735759 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-arm64 -p functional-735759 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.15s)

TestFunctional/parallel/PersistentVolumeClaim (24.99s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [7f6ecb98-27b2-4b71-a933-e02103e9ce17] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.004010127s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-735759 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-735759 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-735759 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-735759 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [3db9b35a-1c4d-4f8c-88f3-cc6d144acc0c] Pending
helpers_test.go:352: "sp-pod" [3db9b35a-1c4d-4f8c-88f3-cc6d144acc0c] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [3db9b35a-1c4d-4f8c-88f3-cc6d144acc0c] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 10.003933421s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-735759 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-735759 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-735759 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [3fe48ce8-a053-450c-a57f-d042a1e8a50b] Pending
helpers_test.go:352: "sp-pod" [3fe48ce8-a053-450c-a57f-d042a1e8a50b] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [3fe48ce8-a053-450c-a57f-d042a1e8a50b] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.003769025s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-735759 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (24.99s)
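
The test applies testdata/storage-provisioner/pvc.yaml and then checks that a file written through the claim survives pod replacement. A minimal claim of the same general shape, as a sketch only: the access mode and storage size below are assumptions, not the contents of the real manifest; the claim name "myclaim" is taken from the log above.

    kubectl --context functional-735759 apply -f - <<'EOF'
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: myclaim
    spec:
      accessModes: ["ReadWriteOnce"]   # assumed; typical for a single-node test
      resources:
        requests:
          storage: 500Mi               # assumed size
    EOF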

TestFunctional/parallel/SSHCmd (0.73s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-arm64 -p functional-735759 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-arm64 -p functional-735759 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.73s)

TestFunctional/parallel/CpCmd (2.49s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-735759 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-735759 ssh -n functional-735759 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-735759 cp functional-735759:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd765697170/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-735759 ssh -n functional-735759 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-735759 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-735759 ssh -n functional-735759 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.49s)
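
The three copy directions exercised above, condensed (the guest-to-host destination below is a placeholder for the per-test temp dir in the log):

    # host -> guest, guest -> host, and host -> a not-yet-existing guest path
    out/minikube-linux-arm64 -p functional-735759 cp testdata/cp-test.txt /home/docker/cp-test.txt
    out/minikube-linux-arm64 -p functional-735759 cp functional-735759:/home/docker/cp-test.txt /tmp/cp-test.txt
    out/minikube-linux-arm64 -p functional-735759 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt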

TestFunctional/parallel/FileSync (0.36s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/271448/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-arm64 -p functional-735759 ssh "sudo cat /etc/test/nested/copy/271448/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.36s)

TestFunctional/parallel/CertSync (2.25s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/271448.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-735759 ssh "sudo cat /etc/ssl/certs/271448.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/271448.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-735759 ssh "sudo cat /usr/share/ca-certificates/271448.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-735759 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/2714482.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-735759 ssh "sudo cat /etc/ssl/certs/2714482.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/2714482.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-735759 ssh "sudo cat /usr/share/ca-certificates/2714482.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-735759 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.25s)
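
The /etc/ssl/certs/51391683.0 and /etc/ssl/certs/3ec20f2e.0 paths checked above follow the OpenSSL c_rehash convention: the filename is the hash of the certificate's subject, with a .0 suffix for the first certificate with that hash. To see how such a name is derived for any PEM certificate (the path below is a placeholder):

    openssl x509 -noout -hash -in /path/to/cert.pem   # prints e.g. 51391683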

TestFunctional/parallel/NodeLabels (0.12s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-735759 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.12s)
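
The same label dump without a Go template, using jsonpath (equivalent in effect, though it prints the label map as a whole rather than iterating keys):

    kubectl --context functional-735759 get nodes -o jsonpath='{.items[0].metadata.labels}'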

TestFunctional/parallel/NonActiveRuntimeDisabled (0.63s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-735759 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-735759 ssh "sudo systemctl is-active docker": exit status 1 (320.837701ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-735759 ssh "sudo systemctl is-active crio"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-735759 ssh "sudo systemctl is-active crio": exit status 1 (306.739799ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.63s)
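
The "Non-zero exit ... status 3" lines above are expected: systemctl is-active exits 0 only when the unit is active, and systemd reports an inactive unit with exit code 3, which minikube ssh propagates. With containerd as this cluster's configured runtime, the checks look like this (the containerd line is the implied positive case, not part of this test):

    out/minikube-linux-arm64 -p functional-735759 ssh "sudo systemctl is-active docker"      # inactive, exit 3
    out/minikube-linux-arm64 -p functional-735759 ssh "sudo systemctl is-active containerd"  # active, exit 0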

TestFunctional/parallel/License (0.34s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.34s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.68s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-735759 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-735759 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-735759 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 304083: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-735759 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.68s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-735759 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.44s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-735759 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [4f57cd36-97cd-4ba1-b98e-bdc2e62e0af2] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx-svc" [4f57cd36-97cd-4ba1-b98e-bdc2e62e0af2] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.003678045s
I1027 22:24:42.348241  271448 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.44s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.09s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-735759 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.09s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.111.137.246 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-735759 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
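
The serial tunnel group above condenses to this lifecycle, using the service and ingress IP from this run (the background/kill handling is a sketch; the suite manages the tunnel process itself):

    out/minikube-linux-arm64 -p functional-735759 tunnel --alsologtostderr &   # start tunnel
    TUNNEL_PID=$!
    kubectl --context functional-735759 get svc nginx-svc \
      -o jsonpath='{.status.loadBalancer.ingress[0].ip}'                       # wait for the ingress IP
    curl -s http://10.111.137.246/                                             # IP as resolved in this run
    kill "$TUNNEL_PID"                                                         # tear the tunnel down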

TestFunctional/parallel/ServiceCmd/DeployApp (8.22s)
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-735759 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-735759 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-bqg46" [bbe4ce2f-3042-4f94-a4bd-e4edec130763] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-75c85bcc94-bqg46" [bbe4ce2f-3042-4f94-a4bd-e4edec130763] Running
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 8.003892907s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (8.22s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.57s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.57s)

TestFunctional/parallel/ServiceCmd/List (0.6s)
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-arm64 -p functional-735759 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.60s)

TestFunctional/parallel/ProfileCmd/profile_list (0.5s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1330: Took "413.922423ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1344: Took "81.301663ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.50s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.56s)
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-arm64 -p functional-735759 service list -o json
functional_test.go:1504: Took "562.617954ms" to run "out/minikube-linux-arm64 -p functional-735759 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.56s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.66s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1381: Took "444.463282ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1394: Took "214.711498ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.66s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.79s)
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-arm64 -p functional-735759 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.49.2:32066
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.79s)

TestFunctional/parallel/MountCmd/any-port (7.98s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-735759 /tmp/TestFunctionalparallelMountCmdany-port2350033851/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1761603900214030166" to /tmp/TestFunctionalparallelMountCmdany-port2350033851/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1761603900214030166" to /tmp/TestFunctionalparallelMountCmdany-port2350033851/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1761603900214030166" to /tmp/TestFunctionalparallelMountCmdany-port2350033851/001/test-1761603900214030166
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-735759 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-735759 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Oct 27 22:25 created-by-test
-rw-r--r-- 1 docker docker 24 Oct 27 22:25 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Oct 27 22:25 test-1761603900214030166
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-735759 ssh cat /mount-9p/test-1761603900214030166
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-735759 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [fd54ab72-e5fb-44f5-934f-9d7bda37d055] Pending
helpers_test.go:352: "busybox-mount" [fd54ab72-e5fb-44f5-934f-9d7bda37d055] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [fd54ab72-e5fb-44f5-934f-9d7bda37d055] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [fd54ab72-e5fb-44f5-934f-9d7bda37d055] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.003650569s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-735759 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-735759 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-735759 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-735759 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-735759 /tmp/TestFunctionalparallelMountCmdany-port2350033851/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.98s)
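
The 9p mount flow above, condensed (the host directory below is a placeholder for the per-test temp dir in the log):

    out/minikube-linux-arm64 mount -p functional-735759 /tmp/hostdir:/mount-9p &         # serve the mount
    out/minikube-linux-arm64 -p functional-735759 ssh "findmnt -T /mount-9p | grep 9p"   # verify it is 9p
    out/minikube-linux-arm64 -p functional-735759 ssh "sudo umount -f /mount-9p"         # clean up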

TestFunctional/parallel/ServiceCmd/Format (0.59s)
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-arm64 -p functional-735759 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.59s)

TestFunctional/parallel/ServiceCmd/URL (0.48s)
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-arm64 -p functional-735759 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:32066
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.48s)

TestFunctional/parallel/MountCmd/specific-port (2.65s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-735759 /tmp/TestFunctionalparallelMountCmdspecific-port1923155805/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-735759 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-735759 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (604.017603ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I1027 22:25:08.798043  271448 retry.go:31] will retry after 691.514186ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-735759 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-735759 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-735759 /tmp/TestFunctionalparallelMountCmdspecific-port1923155805/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-735759 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-735759 ssh "sudo umount -f /mount-9p": exit status 1 (375.740672ms)
-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-735759 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-735759 /tmp/TestFunctionalparallelMountCmdspecific-port1923155805/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.65s)

TestFunctional/parallel/MountCmd/VerifyCleanup (2.07s)
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-735759 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1817123281/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-735759 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1817123281/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-735759 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1817123281/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-735759 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-735759 ssh "findmnt -T" /mount1: exit status 1 (640.475161ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I1027 22:25:11.486168  271448 retry.go:31] will retry after 486.188546ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-735759 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-735759 ssh "findmnt -T" /mount2
2025/10/27 22:25:12 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-735759 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-735759 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-735759 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1817123281/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-735759 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1817123281/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-735759 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1817123281/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.07s)

TestFunctional/parallel/Version/short (0.07s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-arm64 -p functional-735759 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

TestFunctional/parallel/Version/components (1.29s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-arm64 -p functional-735759 version -o=json --components
functional_test.go:2275: (dbg) Done: out/minikube-linux-arm64 -p functional-735759 version -o=json --components: (1.293876683s)
--- PASS: TestFunctional/parallel/Version/components (1.29s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.31s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-735759 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-735759 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.1
registry.k8s.io/kube-proxy:v1.34.1
registry.k8s.io/kube-controller-manager:v1.34.1
registry.k8s.io/kube-apiserver:v1.34.1
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-735759
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:functional-735759
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-735759 image ls --format short --alsologtostderr:
I1027 22:25:19.998209  309836 out.go:360] Setting OutFile to fd 1 ...
I1027 22:25:19.998359  309836 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1027 22:25:19.998366  309836 out.go:374] Setting ErrFile to fd 2...
I1027 22:25:19.998370  309836 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1027 22:25:19.998650  309836 root.go:340] Updating PATH: /home/jenkins/minikube-integration/21790-269600/.minikube/bin
I1027 22:25:19.999333  309836 config.go:182] Loaded profile config "functional-735759": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1027 22:25:19.999469  309836 config.go:182] Loaded profile config "functional-735759": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1027 22:25:20.000006  309836 cli_runner.go:164] Run: docker container inspect functional-735759 --format={{.State.Status}}
I1027 22:25:20.022133  309836 ssh_runner.go:195] Run: systemctl --version
I1027 22:25:20.022191  309836 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-735759
I1027 22:25:20.051764  309836 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21790-269600/.minikube/machines/functional-735759/id_rsa Username:docker}
I1027 22:25:20.169295  309836 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.31s)
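
The ImageList tests in this group run the same listing through the renderers this suite exercises; side by side:

    out/minikube-linux-arm64 -p functional-735759 image ls --format short   # one image:tag per line, as above
    out/minikube-linux-arm64 -p functional-735759 image ls --format table   # boxed table, as below
    out/minikube-linux-arm64 -p functional-735759 image ls --format json    # full digests and sizes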

TestFunctional/parallel/ImageCommands/ImageListTable (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-735759 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-735759 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                    IMAGE                    │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ docker.io/kicbase/echo-server               │ functional-735759  │ sha256:ce2d2c │ 2.17MB │
│ docker.io/library/nginx                     │ latest             │ sha256:e612b9 │ 58.3MB │
│ docker.io/kindest/kindnetd                  │ v20250512-df8de77b │ sha256:b1a8c6 │ 40.6MB │
│ docker.io/library/minikube-local-cache-test │ functional-735759  │ sha256:767eee │ 992B   │
│ docker.io/library/nginx                     │ alpine             │ sha256:9c92f5 │ 23.1MB │
│ gcr.io/k8s-minikube/busybox                 │ 1.28.4-glibc       │ sha256:1611cd │ 1.94MB │
│ registry.k8s.io/pause                       │ latest             │ sha256:8cb209 │ 71.3kB │
│ gcr.io/k8s-minikube/storage-provisioner     │ v5                 │ sha256:ba04bb │ 8.03MB │
│ registry.k8s.io/kube-controller-manager     │ v1.34.1            │ sha256:7eb2c6 │ 20.7MB │
│ registry.k8s.io/kube-scheduler              │ v1.34.1            │ sha256:b5f57e │ 15.8MB │
│ registry.k8s.io/pause                       │ 3.10.1             │ sha256:d7b100 │ 268kB  │
│ registry.k8s.io/coredns/coredns             │ v1.12.1            │ sha256:138784 │ 20.4MB │
│ registry.k8s.io/etcd                        │ 3.6.4-0            │ sha256:a18947 │ 98.2MB │
│ registry.k8s.io/kube-apiserver              │ v1.34.1            │ sha256:43911e │ 24.6MB │
│ registry.k8s.io/kube-proxy                  │ v1.34.1            │ sha256:05baa9 │ 22.8MB │
│ registry.k8s.io/pause                       │ 3.1                │ sha256:8057e0 │ 262kB  │
│ registry.k8s.io/pause                       │ 3.3                │ sha256:3d1873 │ 249kB  │
└─────────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-735759 image ls --format table --alsologtostderr:
I1027 22:25:20.301779  309915 out.go:360] Setting OutFile to fd 1 ...
I1027 22:25:20.302030  309915 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1027 22:25:20.302050  309915 out.go:374] Setting ErrFile to fd 2...
I1027 22:25:20.302056  309915 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1027 22:25:20.302408  309915 root.go:340] Updating PATH: /home/jenkins/minikube-integration/21790-269600/.minikube/bin
I1027 22:25:20.303029  309915 config.go:182] Loaded profile config "functional-735759": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1027 22:25:20.303151  309915 config.go:182] Loaded profile config "functional-735759": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1027 22:25:20.303602  309915 cli_runner.go:164] Run: docker container inspect functional-735759 --format={{.State.Status}}
I1027 22:25:20.328657  309915 ssh_runner.go:195] Run: systemctl --version
I1027 22:25:20.328738  309915 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-735759
I1027 22:25:20.355777  309915 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21790-269600/.minikube/machines/functional-735759/id_rsa Username:docker}
I1027 22:25:20.473713  309915 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.28s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.3s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-735759 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-735759 image ls --format json --alsologtostderr:
[{"id":"sha256:e612b97116b41d24816faa9fd204e1177027648a2cb14bb627dd1eaab1494e8f","repoDigests":["docker.io/library/nginx@sha256:029d4461bd98f124e531380505ceea2072418fdf28752aa73b7b273ba3048903"],"repoTags":["docker.io/library/nginx:latest"],"size":"58257398"},{"id":"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"1935750"},{"id":"sha256:43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196","repoDigests":["registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.1"],"size":"24571109"},{"id":"sha256:7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.1"],"size":"20720058"},{"id":"sha256:b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0","repoDigests":["registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.1"],"size":"15779817"},{"id":"sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"40636774"},{"id":"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-735759"],"size":"2173567"},{"id":"sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"18306114"},{"id":"sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e","repoDigests":["registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"98207481"},{"id":"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"262191"},{"id":"sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"],"repoTags":[],"size":"74084559"},{"id":"sha256:9c92f55c0336c2597a5b458ba84a3fd242b209d8b5079443646a0d269df0d4aa","repoDigests":["docker.io/library/nginx@sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22"],"repoTags":["docker.io/library/nginx:alpine"],"size":"23078652"},{"id":"sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"249461"},{"id":"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"71300"},{"id":"sha256:767eee98907069afcbeaf3a6e247c1cf0b06803bd6eeb6b171b31d1571185492","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-735759"],"size":"992"},{"id":"sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"8034419"},{"id":"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"20392204"},{"id":"sha256:05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9","repoDigests":["registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.1"],"size":"22788047"},{"id":"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"267939"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-735759 image ls --format json --alsologtostderr:
I1027 22:25:20.576313  310013 out.go:360] Setting OutFile to fd 1 ...
I1027 22:25:20.576418  310013 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1027 22:25:20.576424  310013 out.go:374] Setting ErrFile to fd 2...
I1027 22:25:20.576428  310013 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1027 22:25:20.576747  310013 root.go:340] Updating PATH: /home/jenkins/minikube-integration/21790-269600/.minikube/bin
I1027 22:25:20.577650  310013 config.go:182] Loaded profile config "functional-735759": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1027 22:25:20.577819  310013 config.go:182] Loaded profile config "functional-735759": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1027 22:25:20.578423  310013 cli_runner.go:164] Run: docker container inspect functional-735759 --format={{.State.Status}}
I1027 22:25:20.627198  310013 ssh_runner.go:195] Run: systemctl --version
I1027 22:25:20.627262  310013 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-735759
I1027 22:25:20.660995  310013 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21790-269600/.minikube/machines/functional-735759/id_rsa Username:docker}
I1027 22:25:20.775354  310013 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.30s)
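The JSON form above is the easiest of the three `image ls` formats to consume programmatically. As a minimal sketch (not part of the test suite), the array can be decoded into a struct whose fields mirror the keys visible in the stdout; the binary path and profile name are the ones from this run and are otherwise arbitrary:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// listedImage mirrors one element of the `image ls --format json` array:
// id, repoDigests, repoTags, and size (bytes, serialized as a string).
type listedImage struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"`
}

func main() {
	out, err := exec.Command("out/minikube-linux-arm64", "-p", "functional-735759",
		"image", "ls", "--format", "json").Output()
	if err != nil {
		panic(err)
	}
	var images []listedImage
	if err := json.Unmarshal(out, &images); err != nil {
		panic(err)
	}
	for _, img := range images {
		tag := "<none>" // untagged entries (e.g. dashboard) have an empty repoTags list
		if len(img.RepoTags) > 0 {
			tag = img.RepoTags[0]
		}
		fmt.Printf("%-60s %s bytes\n", tag, img.Size)
	}
}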

TestFunctional/parallel/ImageCommands/ImageListYaml (0.28s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-735759 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-735759 image ls --format yaml --alsologtostderr:
- id: sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e
repoDigests:
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "98207481"
- id: sha256:7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.1
size: "20720058"
- id: sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
repoTags: []
size: "74084559"
- id: sha256:767eee98907069afcbeaf3a6e247c1cf0b06803bd6eeb6b171b31d1571185492
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-735759
size: "992"
- id: sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "249461"
- id: sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "71300"
- id: sha256:43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.1
size: "24571109"
- id: sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "20392204"
- id: sha256:05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9
repoDigests:
- registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a
repoTags:
- registry.k8s.io/kube-proxy:v1.34.1
size: "22788047"
- id: sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "40636774"
- id: sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "18306114"
- id: sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-735759
size: "2173567"
- id: sha256:9c92f55c0336c2597a5b458ba84a3fd242b209d8b5079443646a0d269df0d4aa
repoDigests:
- docker.io/library/nginx@sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22
repoTags:
- docker.io/library/nginx:alpine
size: "23078652"
- id: sha256:e612b97116b41d24816faa9fd204e1177027648a2cb14bb627dd1eaab1494e8f
repoDigests:
- docker.io/library/nginx@sha256:029d4461bd98f124e531380505ceea2072418fdf28752aa73b7b273ba3048903
repoTags:
- docker.io/library/nginx:latest
size: "58257398"
- id: sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "8034419"
- id: sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "1935750"
- id: sha256:b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.1
size: "15779817"
- id: sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "262191"
- id: sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
repoTags:
- registry.k8s.io/pause:3.10.1
size: "267939"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-735759 image ls --format yaml --alsologtostderr:
I1027 22:25:19.989746  309837 out.go:360] Setting OutFile to fd 1 ...
I1027 22:25:19.990008  309837 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1027 22:25:19.990039  309837 out.go:374] Setting ErrFile to fd 2...
I1027 22:25:19.990075  309837 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1027 22:25:19.990654  309837 root.go:340] Updating PATH: /home/jenkins/minikube-integration/21790-269600/.minikube/bin
I1027 22:25:19.991937  309837 config.go:182] Loaded profile config "functional-735759": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1027 22:25:19.992174  309837 config.go:182] Loaded profile config "functional-735759": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1027 22:25:19.993025  309837 cli_runner.go:164] Run: docker container inspect functional-735759 --format={{.State.Status}}
I1027 22:25:20.021285  309837 ssh_runner.go:195] Run: systemctl --version
I1027 22:25:20.021345  309837 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-735759
I1027 22:25:20.044259  309837 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21790-269600/.minikube/machines/functional-735759/id_rsa Username:docker}
I1027 22:25:20.151850  309837 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.28s)

TestFunctional/parallel/ImageCommands/ImageBuild (4.09s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-arm64 -p functional-735759 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-735759 ssh pgrep buildkitd: exit status 1 (369.481512ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-arm64 -p functional-735759 image build -t localhost/my-image:functional-735759 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-arm64 -p functional-735759 image build -t localhost/my-image:functional-735759 testdata/build --alsologtostderr: (3.480301357s)
functional_test.go:338: (dbg) Stderr: out/minikube-linux-arm64 -p functional-735759 image build -t localhost/my-image:functional-735759 testdata/build --alsologtostderr:
I1027 22:25:20.658514  310019 out.go:360] Setting OutFile to fd 1 ...
I1027 22:25:20.659370  310019 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1027 22:25:20.659414  310019 out.go:374] Setting ErrFile to fd 2...
I1027 22:25:20.659433  310019 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1027 22:25:20.659743  310019 root.go:340] Updating PATH: /home/jenkins/minikube-integration/21790-269600/.minikube/bin
I1027 22:25:20.660446  310019 config.go:182] Loaded profile config "functional-735759": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1027 22:25:20.663757  310019 config.go:182] Loaded profile config "functional-735759": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1027 22:25:20.664374  310019 cli_runner.go:164] Run: docker container inspect functional-735759 --format={{.State.Status}}
I1027 22:25:20.690727  310019 ssh_runner.go:195] Run: systemctl --version
I1027 22:25:20.690812  310019 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-735759
I1027 22:25:20.708892  310019 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21790-269600/.minikube/machines/functional-735759/id_rsa Username:docker}
I1027 22:25:20.816974  310019 build_images.go:162] Building image from path: /tmp/build.2409152868.tar
I1027 22:25:20.817067  310019 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1027 22:25:20.825538  310019 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2409152868.tar
I1027 22:25:20.829309  310019 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2409152868.tar: stat -c "%s %y" /var/lib/minikube/build/build.2409152868.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.2409152868.tar': No such file or directory
I1027 22:25:20.829339  310019 ssh_runner.go:362] scp /tmp/build.2409152868.tar --> /var/lib/minikube/build/build.2409152868.tar (3072 bytes)
I1027 22:25:20.849928  310019 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2409152868
I1027 22:25:20.858244  310019 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2409152868 -xf /var/lib/minikube/build/build.2409152868.tar
I1027 22:25:20.867443  310019 containerd.go:394] Building image: /var/lib/minikube/build/build.2409152868
I1027 22:25:20.867520  310019 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.2409152868 --local dockerfile=/var/lib/minikube/build/build.2409152868 --output type=image,name=localhost/my-image:functional-735759
#1 [internal] load build definition from Dockerfile
#1 DONE 0.0s

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.5s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 DONE 0.1s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.2s
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.4s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.1s done
#5 DONE 0.6s

#6 [2/3] RUN true
#6 DONE 0.6s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:b066ce1c953e7113d266de8500d67a50d9142b58438758a160c3736c72eed56f 0.0s done
#8 exporting config sha256:b5f39a8218b0ac9e35fefd2ba6616d4bd68651ef207f13112f9509cb8119ec41 0.0s done
#8 naming to localhost/my-image:functional-735759 done
#8 DONE 0.2s
I1027 22:25:24.022969  310019 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.2409152868 --local dockerfile=/var/lib/minikube/build/build.2409152868 --output type=image,name=localhost/my-image:functional-735759: (3.155419034s)
I1027 22:25:24.023039  310019 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2409152868
I1027 22:25:24.034504  310019 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2409152868.tar
I1027 22:25:24.045141  310019 build_images.go:218] Built localhost/my-image:functional-735759 from /tmp/build.2409152868.tar
I1027 22:25:24.045172  310019 build_images.go:134] succeeded building to: functional-735759
I1027 22:25:24.045177  310019 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-735759 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.09s)
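For reference, the buildkit stages above correspond to a three-instruction Dockerfile in testdata/build: a FROM on gcr.io/k8s-minikube/busybox:latest (#5), a RUN true (#6), and an ADD content.txt / (#7). A rough sketch (outside the harness) of driving the same build-and-verify flow, with the tag and context path taken from this run:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// run shells out to the minikube binary used throughout this report
// and returns its combined output.
func run(args ...string) (string, error) {
	out, err := exec.Command("out/minikube-linux-arm64", args...).CombinedOutput()
	return string(out), err
}

func main() {
	profile := "functional-735759" // any running profile works
	tag := "localhost/my-image:" + profile

	// Build the image inside the cluster from the small context directory.
	if _, err := run("-p", profile, "image", "build", "-t", tag, "testdata/build"); err != nil {
		panic(err)
	}
	// Confirm the tag now appears in the runtime's image list, as the
	// test does via functional_test.go:466.
	list, err := run("-p", profile, "image", "ls")
	if err != nil {
		panic(err)
	}
	if !strings.Contains(list, tag) {
		panic(fmt.Sprintf("%s not found after build", tag))
	}
	fmt.Println("built and listed:", tag)
}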

TestFunctional/parallel/ImageCommands/Setup (0.68s)
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-735759
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.68s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.21s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-735759 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.21s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.23s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-735759 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.23s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.2s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-735759 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.20s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.37s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-arm64 -p functional-735759 image load --daemon kicbase/echo-server:functional-735759 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-arm64 -p functional-735759 image load --daemon kicbase/echo-server:functional-735759 --alsologtostderr: (1.049411739s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-735759 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.37s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.36s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-735759 image load --daemon kicbase/echo-server:functional-735759 --alsologtostderr
functional_test.go:380: (dbg) Done: out/minikube-linux-arm64 -p functional-735759 image load --daemon kicbase/echo-server:functional-735759 --alsologtostderr: (1.060994057s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-735759 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.36s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.44s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-735759
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-735759 image load --daemon kicbase/echo-server:functional-735759 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-735759 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.44s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.43s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-arm64 -p functional-735759 image save kicbase/echo-server:functional-735759 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.43s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.49s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-arm64 -p functional-735759 image rm kicbase/echo-server:functional-735759 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-735759 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.49s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.63s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-735759 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-735759 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.63s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.41s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-735759
functional_test.go:439: (dbg) Run:  out/minikube-linux-arm64 -p functional-735759 image save --daemon kicbase/echo-server:functional-735759 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect kicbase/echo-server:functional-735759
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.41s)
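Taken together, ImageSaveToFile, ImageRemove, ImageLoadFromFile, and ImageSaveDaemon walk one image through a full round trip: cluster runtime, to a tar on the host, back into the runtime, and finally back into the host's docker daemon. A condensed sketch of the same sequence (the tar path here is illustrative; the suite writes into the Jenkins workspace):

package main

import "os/exec"

// mk shells out to the minikube binary used throughout this report.
func mk(args ...string) error {
	return exec.Command("out/minikube-linux-arm64", args...).Run()
}

func main() {
	profile := "functional-735759"
	tag := "kicbase/echo-server:" + profile
	tar := "/tmp/echo-server-save.tar" // illustrative path
	steps := [][]string{
		{"-p", profile, "image", "save", tag, tar},        // runtime image -> tar on the host
		{"-p", profile, "image", "rm", tag},               // remove it from the cluster runtime
		{"-p", profile, "image", "load", tar},             // restore it from the tar
		{"-p", profile, "image", "save", "--daemon", tag}, // push it back into the host docker daemon
	}
	for _, step := range steps {
		if err := mk(step...); err != nil {
			panic(err)
		}
	}
}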

TestFunctional/delete_echo-server_images (0.04s)
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-735759
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-735759
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-735759
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (216.01s)
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 -p ha-911812 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd
E1027 22:25:32.700770  271448 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-269600/.minikube/profiles/addons-437249/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 22:27:48.838274  271448 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-269600/.minikube/profiles/addons-437249/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 22:28:16.542537  271448 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-269600/.minikube/profiles/addons-437249/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 -p ha-911812 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd: (3m35.094816938s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-911812 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (216.01s)

TestMultiControlPlane/serial/DeployApp (44.05s)
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 -p ha-911812 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 -p ha-911812 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 -p ha-911812 kubectl -- rollout status deployment/busybox: (4.759827894s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-911812 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 4 (may be temporary), output: "\n-- stdout --\n\t'10.244.2.2 10.244.2.3 10.244.0.4 10.244.1.2'\n\n-- /stdout --"
I1027 22:29:08.323423  271448 retry.go:31] will retry after 1.052810234s: expected 3 Pod IPs but got 4 (may be temporary), output: "\n-- stdout --\n\t'10.244.2.2 10.244.2.3 10.244.0.4 10.244.1.2'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-911812 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 4 (may be temporary), output: "\n-- stdout --\n\t'10.244.2.2 10.244.2.3 10.244.0.4 10.244.1.2'\n\n-- /stdout --"
I1027 22:29:09.554936  271448 retry.go:31] will retry after 2.234580369s: expected 3 Pod IPs but got 4 (may be temporary), output: "\n-- stdout --\n\t'10.244.2.2 10.244.2.3 10.244.0.4 10.244.1.2'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-911812 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 4 (may be temporary), output: "\n-- stdout --\n\t'10.244.2.2 10.244.2.3 10.244.0.4 10.244.1.2'\n\n-- /stdout --"
I1027 22:29:11.989106  271448 retry.go:31] will retry after 3.070877816s: expected 3 Pod IPs but got 4 (may be temporary), output: "\n-- stdout --\n\t'10.244.2.2 10.244.2.3 10.244.0.4 10.244.1.2'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-911812 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 4 (may be temporary), output: "\n-- stdout --\n\t'10.244.2.2 10.244.2.3 10.244.0.4 10.244.1.2'\n\n-- /stdout --"
I1027 22:29:15.222683  271448 retry.go:31] will retry after 3.937802348s: expected 3 Pod IPs but got 4 (may be temporary), output: "\n-- stdout --\n\t'10.244.2.2 10.244.2.3 10.244.0.4 10.244.1.2'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-911812 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 4 (may be temporary), output: "\n-- stdout --\n\t'10.244.2.2 10.244.2.3 10.244.0.4 10.244.1.2'\n\n-- /stdout --"
I1027 22:29:19.329899  271448 retry.go:31] will retry after 3.201523422s: expected 3 Pod IPs but got 4 (may be temporary), output: "\n-- stdout --\n\t'10.244.2.2 10.244.2.3 10.244.0.4 10.244.1.2'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-911812 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 4 (may be temporary), output: "\n-- stdout --\n\t'10.244.2.2 10.244.2.3 10.244.0.4 10.244.1.2'\n\n-- /stdout --"
I1027 22:29:22.703348  271448 retry.go:31] will retry after 10.834006361s: expected 3 Pod IPs but got 4 (may be temporary), output: "\n-- stdout --\n\t'10.244.2.2 10.244.2.3 10.244.0.4 10.244.1.2'\n\n-- /stdout --"
E1027 22:29:32.907549  271448 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-269600/.minikube/profiles/functional-735759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 22:29:32.914063  271448 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-269600/.minikube/profiles/functional-735759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 22:29:32.925654  271448 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-269600/.minikube/profiles/functional-735759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 22:29:32.947176  271448 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-269600/.minikube/profiles/functional-735759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 22:29:32.988587  271448 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-269600/.minikube/profiles/functional-735759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 22:29:33.070103  271448 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-269600/.minikube/profiles/functional-735759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 22:29:33.231667  271448 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-269600/.minikube/profiles/functional-735759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-911812 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
E1027 22:29:33.553774  271448 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-269600/.minikube/profiles/functional-735759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:149: expected 3 Pod IPs but got 4 (may be temporary), output: "\n-- stdout --\n\t'10.244.2.2 10.244.2.3 10.244.0.4 10.244.1.2'\n\n-- /stdout --"
I1027 22:29:33.700163  271448 retry.go:31] will retry after 10.650187092s: expected 3 Pod IPs but got 4 (may be temporary), output: "\n-- stdout --\n\t'10.244.2.2 10.244.2.3 10.244.0.4 10.244.1.2'\n\n-- /stdout --"
E1027 22:29:34.195372  271448 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-269600/.minikube/profiles/functional-735759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 22:29:35.476981  271448 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-269600/.minikube/profiles/functional-735759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 22:29:38.039835  271448 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-269600/.minikube/profiles/functional-735759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 22:29:43.161885  271448 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-269600/.minikube/profiles/functional-735759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-911812 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 -p ha-911812 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-911812 kubectl -- exec busybox-7b57f96db7-kl5m6 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-911812 kubectl -- exec busybox-7b57f96db7-mr2ks -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-911812 kubectl -- exec busybox-7b57f96db7-wfts7 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-911812 kubectl -- exec busybox-7b57f96db7-kl5m6 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-911812 kubectl -- exec busybox-7b57f96db7-mr2ks -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-911812 kubectl -- exec busybox-7b57f96db7-wfts7 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-911812 kubectl -- exec busybox-7b57f96db7-kl5m6 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-911812 kubectl -- exec busybox-7b57f96db7-mr2ks -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-911812 kubectl -- exec busybox-7b57f96db7-wfts7 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (44.05s)
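The retry loop visible above is a plain poll-with-backoff: query the pod IPs through kubectl's jsonpath output and try again while more than three show up (the fourth IP presumably belongs to a replica not yet torn down after the rollout). A sketch of the same loop, with illustrative timings in place of the randomized delays retry.go uses:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	delay := time.Second
	for attempt := 0; attempt < 10; attempt++ {
		// jsonpath prints the pod IPs space-separated on one line.
		out, err := exec.Command("kubectl", "--context", "ha-911812",
			"get", "pods", "-o", "jsonpath={.items[*].status.podIP}").Output()
		if err != nil {
			panic(err)
		}
		ips := strings.Fields(string(out))
		if len(ips) == 3 {
			fmt.Println("pod IPs settled:", ips)
			return
		}
		fmt.Printf("expected 3 Pod IPs but got %d, retrying in %v\n", len(ips), delay)
		time.Sleep(delay)
		delay *= 2 // crude doubling backoff; retry.go randomizes instead
	}
	panic("pod IPs never settled")
}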

TestMultiControlPlane/serial/PingHostFromPods (1.66s)
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 -p ha-911812 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-911812 kubectl -- exec busybox-7b57f96db7-kl5m6 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-911812 kubectl -- exec busybox-7b57f96db7-kl5m6 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-911812 kubectl -- exec busybox-7b57f96db7-mr2ks -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-911812 kubectl -- exec busybox-7b57f96db7-mr2ks -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-911812 kubectl -- exec busybox-7b57f96db7-wfts7 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-911812 kubectl -- exec busybox-7b57f96db7-wfts7 -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.66s)

TestMultiControlPlane/serial/AddWorkerNode (61.12s)
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 -p ha-911812 node add --alsologtostderr -v 5
E1027 22:29:53.403852  271448 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-269600/.minikube/profiles/functional-735759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 22:30:13.885606  271448 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-269600/.minikube/profiles/functional-735759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 -p ha-911812 node add --alsologtostderr -v 5: (1m0.017096954s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-911812 status --alsologtostderr -v 5
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-911812 status --alsologtostderr -v 5: (1.102938683s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (61.12s)

TestMultiControlPlane/serial/NodeLabels (0.12s)
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-911812 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.12s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (1.2s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.1951658s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.20s)

TestMultiControlPlane/serial/CopyFile (20.9s)
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-arm64 -p ha-911812 status --output json --alsologtostderr -v 5
ha_test.go:328: (dbg) Done: out/minikube-linux-arm64 -p ha-911812 status --output json --alsologtostderr -v 5: (1.039811886s)
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-911812 cp testdata/cp-test.txt ha-911812:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-911812 ssh -n ha-911812 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-911812 cp ha-911812:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1159375207/001/cp-test_ha-911812.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-911812 ssh -n ha-911812 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-911812 cp ha-911812:/home/docker/cp-test.txt ha-911812-m02:/home/docker/cp-test_ha-911812_ha-911812-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-911812 ssh -n ha-911812 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-911812 ssh -n ha-911812-m02 "sudo cat /home/docker/cp-test_ha-911812_ha-911812-m02.txt"
E1027 22:30:54.847087  271448 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-269600/.minikube/profiles/functional-735759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-911812 cp ha-911812:/home/docker/cp-test.txt ha-911812-m03:/home/docker/cp-test_ha-911812_ha-911812-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-911812 ssh -n ha-911812 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-911812 ssh -n ha-911812-m03 "sudo cat /home/docker/cp-test_ha-911812_ha-911812-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-911812 cp ha-911812:/home/docker/cp-test.txt ha-911812-m04:/home/docker/cp-test_ha-911812_ha-911812-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-911812 ssh -n ha-911812 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-911812 ssh -n ha-911812-m04 "sudo cat /home/docker/cp-test_ha-911812_ha-911812-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-911812 cp testdata/cp-test.txt ha-911812-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-911812 ssh -n ha-911812-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-911812 cp ha-911812-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1159375207/001/cp-test_ha-911812-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-911812 ssh -n ha-911812-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-911812 cp ha-911812-m02:/home/docker/cp-test.txt ha-911812:/home/docker/cp-test_ha-911812-m02_ha-911812.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-911812 ssh -n ha-911812-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-911812 ssh -n ha-911812 "sudo cat /home/docker/cp-test_ha-911812-m02_ha-911812.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-911812 cp ha-911812-m02:/home/docker/cp-test.txt ha-911812-m03:/home/docker/cp-test_ha-911812-m02_ha-911812-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-911812 ssh -n ha-911812-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-911812 ssh -n ha-911812-m03 "sudo cat /home/docker/cp-test_ha-911812-m02_ha-911812-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-911812 cp ha-911812-m02:/home/docker/cp-test.txt ha-911812-m04:/home/docker/cp-test_ha-911812-m02_ha-911812-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-911812 ssh -n ha-911812-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-911812 ssh -n ha-911812-m04 "sudo cat /home/docker/cp-test_ha-911812-m02_ha-911812-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-911812 cp testdata/cp-test.txt ha-911812-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-911812 ssh -n ha-911812-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-911812 cp ha-911812-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1159375207/001/cp-test_ha-911812-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-911812 ssh -n ha-911812-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-911812 cp ha-911812-m03:/home/docker/cp-test.txt ha-911812:/home/docker/cp-test_ha-911812-m03_ha-911812.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-911812 ssh -n ha-911812-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-911812 ssh -n ha-911812 "sudo cat /home/docker/cp-test_ha-911812-m03_ha-911812.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-911812 cp ha-911812-m03:/home/docker/cp-test.txt ha-911812-m02:/home/docker/cp-test_ha-911812-m03_ha-911812-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-911812 ssh -n ha-911812-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-911812 ssh -n ha-911812-m02 "sudo cat /home/docker/cp-test_ha-911812-m03_ha-911812-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-911812 cp ha-911812-m03:/home/docker/cp-test.txt ha-911812-m04:/home/docker/cp-test_ha-911812-m03_ha-911812-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-911812 ssh -n ha-911812-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-911812 ssh -n ha-911812-m04 "sudo cat /home/docker/cp-test_ha-911812-m03_ha-911812-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-911812 cp testdata/cp-test.txt ha-911812-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-911812 ssh -n ha-911812-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-911812 cp ha-911812-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1159375207/001/cp-test_ha-911812-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-911812 ssh -n ha-911812-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-911812 cp ha-911812-m04:/home/docker/cp-test.txt ha-911812:/home/docker/cp-test_ha-911812-m04_ha-911812.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-911812 ssh -n ha-911812-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-911812 ssh -n ha-911812 "sudo cat /home/docker/cp-test_ha-911812-m04_ha-911812.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-911812 cp ha-911812-m04:/home/docker/cp-test.txt ha-911812-m02:/home/docker/cp-test_ha-911812-m04_ha-911812-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-911812 ssh -n ha-911812-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-911812 ssh -n ha-911812-m02 "sudo cat /home/docker/cp-test_ha-911812-m04_ha-911812-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-911812 cp ha-911812-m04:/home/docker/cp-test.txt ha-911812-m03:/home/docker/cp-test_ha-911812-m04_ha-911812-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-911812 ssh -n ha-911812-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-911812 ssh -n ha-911812-m03 "sudo cat /home/docker/cp-test_ha-911812-m04_ha-911812-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (20.90s)
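Every step in the CopyFile block above is one of two shapes: `cp` a file onto a node, or `ssh -n` into a node and `sudo cat` the copy back to verify it, repeated over every ordered pair of nodes. A compact sketch of that matrix (it skips the intermediate local-tmp copies and re-reads the suite also performs):

package main

import (
	"fmt"
	"os/exec"
)

// mk shells out to the minikube binary used throughout this report.
func mk(args ...string) error {
	return exec.Command("out/minikube-linux-arm64", args...).Run()
}

func main() {
	nodes := []string{"ha-911812", "ha-911812-m02", "ha-911812-m03", "ha-911812-m04"}
	for _, src := range nodes {
		// Seed the source node, then fan the file out to every other node.
		if err := mk("-p", "ha-911812", "cp", "testdata/cp-test.txt",
			src+":/home/docker/cp-test.txt"); err != nil {
			panic(err)
		}
		for _, dst := range nodes {
			if src == dst {
				continue
			}
			remote := fmt.Sprintf("%s:/home/docker/cp-test_%s_%s.txt", dst, src, dst)
			if err := mk("-p", "ha-911812", "cp",
				src+":/home/docker/cp-test.txt", remote); err != nil {
				panic(err)
			}
			// Verify the copy landed by reading it back on the target node.
			if err := mk("-p", "ha-911812", "ssh", "-n", dst,
				fmt.Sprintf("sudo cat /home/docker/cp-test_%s_%s.txt", src, dst)); err != nil {
				panic(err)
			}
		}
	}
}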

TestMultiControlPlane/serial/StopSecondaryNode (13s)
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p ha-911812 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p ha-911812 node stop m02 --alsologtostderr -v 5: (12.162451363s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-arm64 -p ha-911812 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-911812 status --alsologtostderr -v 5: exit status 7 (835.980258ms)

-- stdout --
	ha-911812
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-911812-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-911812-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-911812-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr **
	I1027 22:31:24.425636  327034 out.go:360] Setting OutFile to fd 1 ...
	I1027 22:31:24.425779  327034 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 22:31:24.425790  327034 out.go:374] Setting ErrFile to fd 2...
	I1027 22:31:24.425795  327034 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 22:31:24.426055  327034 root.go:340] Updating PATH: /home/jenkins/minikube-integration/21790-269600/.minikube/bin
	I1027 22:31:24.426239  327034 out.go:368] Setting JSON to false
	I1027 22:31:24.426281  327034 mustload.go:66] Loading cluster: ha-911812
	I1027 22:31:24.426340  327034 notify.go:221] Checking for updates...
	I1027 22:31:24.427682  327034 config.go:182] Loaded profile config "ha-911812": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1027 22:31:24.427709  327034 status.go:174] checking status of ha-911812 ...
	I1027 22:31:24.428226  327034 cli_runner.go:164] Run: docker container inspect ha-911812 --format={{.State.Status}}
	I1027 22:31:24.453533  327034 status.go:371] ha-911812 host status = "Running" (err=<nil>)
	I1027 22:31:24.453567  327034 host.go:66] Checking if "ha-911812" exists ...
	I1027 22:31:24.453980  327034 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-911812
	I1027 22:31:24.491304  327034 host.go:66] Checking if "ha-911812" exists ...
	I1027 22:31:24.491693  327034 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1027 22:31:24.491749  327034 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-911812
	I1027 22:31:24.512154  327034 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33154 SSHKeyPath:/home/jenkins/minikube-integration/21790-269600/.minikube/machines/ha-911812/id_rsa Username:docker}
	I1027 22:31:24.622293  327034 ssh_runner.go:195] Run: systemctl --version
	I1027 22:31:24.629170  327034 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1027 22:31:24.650477  327034 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 22:31:24.713260  327034 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:68 OomKillDisable:true NGoroutines:72 SystemTime:2025-10-27 22:31:24.703042106 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1027 22:31:24.713828  327034 kubeconfig.go:125] found "ha-911812" server: "https://192.168.49.254:8443"
	I1027 22:31:24.713864  327034 api_server.go:166] Checking apiserver status ...
	I1027 22:31:24.713913  327034 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:31:24.727244  327034 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1532/cgroup
	I1027 22:31:24.736482  327034 api_server.go:182] apiserver freezer: "12:freezer:/docker/b7a7357c169fa7f08683457c19cc014924c32267cda0e65a08df99ce2e145fcc/kubepods/burstable/pod6e82de36191122d3266672aa7f4f02cb/188f7a0dbbf443ec5e441876bed1031591193be0e86ad017d90f262c43015719"
	I1027 22:31:24.736565  327034 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/b7a7357c169fa7f08683457c19cc014924c32267cda0e65a08df99ce2e145fcc/kubepods/burstable/pod6e82de36191122d3266672aa7f4f02cb/188f7a0dbbf443ec5e441876bed1031591193be0e86ad017d90f262c43015719/freezer.state
	I1027 22:31:24.744638  327034 api_server.go:204] freezer state: "THAWED"
	I1027 22:31:24.744674  327034 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1027 22:31:24.758034  327034 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1027 22:31:24.758067  327034 status.go:463] ha-911812 apiserver status = Running (err=<nil>)
	I1027 22:31:24.758080  327034 status.go:176] ha-911812 status: &{Name:ha-911812 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1027 22:31:24.758105  327034 status.go:174] checking status of ha-911812-m02 ...
	I1027 22:31:24.758412  327034 cli_runner.go:164] Run: docker container inspect ha-911812-m02 --format={{.State.Status}}
	I1027 22:31:24.777544  327034 status.go:371] ha-911812-m02 host status = "Stopped" (err=<nil>)
	I1027 22:31:24.777568  327034 status.go:384] host is not running, skipping remaining checks
	I1027 22:31:24.777575  327034 status.go:176] ha-911812-m02 status: &{Name:ha-911812-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1027 22:31:24.777597  327034 status.go:174] checking status of ha-911812-m03 ...
	I1027 22:31:24.777927  327034 cli_runner.go:164] Run: docker container inspect ha-911812-m03 --format={{.State.Status}}
	I1027 22:31:24.795674  327034 status.go:371] ha-911812-m03 host status = "Running" (err=<nil>)
	I1027 22:31:24.795709  327034 host.go:66] Checking if "ha-911812-m03" exists ...
	I1027 22:31:24.796019  327034 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-911812-m03
	I1027 22:31:24.814114  327034 host.go:66] Checking if "ha-911812-m03" exists ...
	I1027 22:31:24.814458  327034 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1027 22:31:24.814502  327034 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-911812-m03
	I1027 22:31:24.834888  327034 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33164 SSHKeyPath:/home/jenkins/minikube-integration/21790-269600/.minikube/machines/ha-911812-m03/id_rsa Username:docker}
	I1027 22:31:24.939154  327034 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1027 22:31:24.952735  327034 kubeconfig.go:125] found "ha-911812" server: "https://192.168.49.254:8443"
	I1027 22:31:24.952766  327034 api_server.go:166] Checking apiserver status ...
	I1027 22:31:24.952893  327034 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:31:24.966093  327034 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1345/cgroup
	I1027 22:31:24.976695  327034 api_server.go:182] apiserver freezer: "12:freezer:/docker/fb2f3682768b8b07b33f57c49a337e3c209ecc7da5e8224a694ac6d79654b49f/kubepods/burstable/pod2eb2900f73aeac54d1e5046083fa3cad/a24c702152cd058068fcdf08bf6a938dca4daa4a393c10455deb9f8f6591fe89"
	I1027 22:31:24.976838  327034 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/fb2f3682768b8b07b33f57c49a337e3c209ecc7da5e8224a694ac6d79654b49f/kubepods/burstable/pod2eb2900f73aeac54d1e5046083fa3cad/a24c702152cd058068fcdf08bf6a938dca4daa4a393c10455deb9f8f6591fe89/freezer.state
	I1027 22:31:24.985393  327034 api_server.go:204] freezer state: "THAWED"
	I1027 22:31:24.985422  327034 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1027 22:31:24.994798  327034 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1027 22:31:24.994836  327034 status.go:463] ha-911812-m03 apiserver status = Running (err=<nil>)
	I1027 22:31:24.994846  327034 status.go:176] ha-911812-m03 status: &{Name:ha-911812-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1027 22:31:24.994887  327034 status.go:174] checking status of ha-911812-m04 ...
	I1027 22:31:24.995239  327034 cli_runner.go:164] Run: docker container inspect ha-911812-m04 --format={{.State.Status}}
	I1027 22:31:25.019870  327034 status.go:371] ha-911812-m04 host status = "Running" (err=<nil>)
	I1027 22:31:25.019900  327034 host.go:66] Checking if "ha-911812-m04" exists ...
	I1027 22:31:25.020224  327034 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-911812-m04
	I1027 22:31:25.049126  327034 host.go:66] Checking if "ha-911812-m04" exists ...
	I1027 22:31:25.049477  327034 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1027 22:31:25.049544  327034 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-911812-m04
	I1027 22:31:25.067704  327034 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33169 SSHKeyPath:/home/jenkins/minikube-integration/21790-269600/.minikube/machines/ha-911812-m04/id_rsa Username:docker}
	I1027 22:31:25.178996  327034 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1027 22:31:25.192923  327034 status.go:176] ha-911812-m04 status: &{Name:ha-911812-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (13.00s)
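Note: the stderr trace above is how `minikube status` decides an apiserver is Running: it finds the kube-apiserver PID, checks that the process's cgroup freezer is THAWED, then probes /healthz on the load-balancer endpoint. A rough shell equivalent, run inside the node (paths generalized from the log; the plain curl is a sketch and may need the profile's client certs if anonymous auth is disabled):

	# newest kube-apiserver process, as status.go does with pgrep
	PID=$(sudo pgrep -xnf 'kube-apiserver.*minikube.*')
	# its freezer cgroup; THAWED means the pod is not paused
	CG=$(sudo grep -E '^[0-9]+:freezer:' /proc/$PID/cgroup | cut -d: -f3)
	sudo cat /sys/fs/cgroup/freezer$CG/freezer.state      # expect: THAWED
	# finally the health probe the log shows returning 200
	curl -ks https://192.168.49.254:8443/healthz          # expect: ok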

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.85s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.85s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (14.41s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p ha-911812 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p ha-911812 node start m02 --alsologtostderr -v 5: (12.684508421s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-911812 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Done: out/minikube-linux-arm64 -p ha-911812 status --alsologtostderr -v 5: (1.572328686s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (14.41s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.54s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.538998231s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.54s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (108.58s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-arm64 -p ha-911812 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-arm64 -p ha-911812 stop --alsologtostderr -v 5
E1027 22:32:16.770054  271448 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-269600/.minikube/profiles/functional-735759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-arm64 -p ha-911812 stop --alsologtostderr -v 5: (38.160661992s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-arm64 -p ha-911812 start --wait true --alsologtostderr -v 5
E1027 22:32:48.838257  271448 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-269600/.minikube/profiles/addons-437249/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-arm64 -p ha-911812 start --wait true --alsologtostderr -v 5: (1m10.202447627s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-arm64 -p ha-911812 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (108.58s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (10.52s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-arm64 -p ha-911812 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-arm64 -p ha-911812 node delete m03 --alsologtostderr -v 5: (9.525718136s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-arm64 -p ha-911812 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (10.52s)
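Note: the final kubectl call above asserts node readiness with a go-template rather than jsonpath. Stripped of the test harness's extra quoting, the same standalone check is (template taken from the log):

	kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}'
	# one "True" per Ready node; a False/Unknown would flag a node left behind by the delete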

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.83s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.83s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (36.47s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-arm64 -p ha-911812 stop --alsologtostderr -v 5
ha_test.go:533: (dbg) Done: out/minikube-linux-arm64 -p ha-911812 stop --alsologtostderr -v 5: (36.354197231s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-arm64 -p ha-911812 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-911812 status --alsologtostderr -v 5: exit status 7 (114.464631ms)

-- stdout --
	ha-911812
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-911812-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-911812-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1027 22:34:18.327121  342224 out.go:360] Setting OutFile to fd 1 ...
	I1027 22:34:18.327325  342224 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 22:34:18.327353  342224 out.go:374] Setting ErrFile to fd 2...
	I1027 22:34:18.327373  342224 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 22:34:18.327654  342224 root.go:340] Updating PATH: /home/jenkins/minikube-integration/21790-269600/.minikube/bin
	I1027 22:34:18.327898  342224 out.go:368] Setting JSON to false
	I1027 22:34:18.327958  342224 mustload.go:66] Loading cluster: ha-911812
	I1027 22:34:18.328024  342224 notify.go:221] Checking for updates...
	I1027 22:34:18.329156  342224 config.go:182] Loaded profile config "ha-911812": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1027 22:34:18.329212  342224 status.go:174] checking status of ha-911812 ...
	I1027 22:34:18.329854  342224 cli_runner.go:164] Run: docker container inspect ha-911812 --format={{.State.Status}}
	I1027 22:34:18.347783  342224 status.go:371] ha-911812 host status = "Stopped" (err=<nil>)
	I1027 22:34:18.347803  342224 status.go:384] host is not running, skipping remaining checks
	I1027 22:34:18.347809  342224 status.go:176] ha-911812 status: &{Name:ha-911812 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1027 22:34:18.347838  342224 status.go:174] checking status of ha-911812-m02 ...
	I1027 22:34:18.348134  342224 cli_runner.go:164] Run: docker container inspect ha-911812-m02 --format={{.State.Status}}
	I1027 22:34:18.368934  342224 status.go:371] ha-911812-m02 host status = "Stopped" (err=<nil>)
	I1027 22:34:18.368956  342224 status.go:384] host is not running, skipping remaining checks
	I1027 22:34:18.368963  342224 status.go:176] ha-911812-m02 status: &{Name:ha-911812-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1027 22:34:18.368987  342224 status.go:174] checking status of ha-911812-m04 ...
	I1027 22:34:18.369291  342224 cli_runner.go:164] Run: docker container inspect ha-911812-m04 --format={{.State.Status}}
	I1027 22:34:18.393841  342224 status.go:371] ha-911812-m04 host status = "Stopped" (err=<nil>)
	I1027 22:34:18.393866  342224 status.go:384] host is not running, skipping remaining checks
	I1027 22:34:18.393873  342224 status.go:176] ha-911812-m04 status: &{Name:ha-911812-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (36.47s)
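Note: the Non-zero exit above is the expected outcome, not an error: `minikube status` signals stopped components through its exit code, and 7 is what this fully stopped profile returns in the log. A script can branch on it (a sketch using this run's profile name; the exit-code meaning is taken from the run above):

	out/minikube-linux-arm64 -p ha-911812 status
	case $? in
	  0) echo "all components running" ;;
	  7) echo "profile stopped, as StopCluster expects" ;;
	  *) echo "partial or unexpected state" ;;
	esac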

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (62.55s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-arm64 -p ha-911812 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd
E1027 22:34:32.905009  271448 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-269600/.minikube/profiles/functional-735759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 22:35:00.612413  271448 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-269600/.minikube/profiles/functional-735759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-arm64 -p ha-911812 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd: (1m1.55677921s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-arm64 -p ha-911812 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (62.55s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.79s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.79s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (81.28s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-arm64 -p ha-911812 node add --control-plane --alsologtostderr -v 5
ha_test.go:607: (dbg) Done: out/minikube-linux-arm64 -p ha-911812 node add --control-plane --alsologtostderr -v 5: (1m20.121635487s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-arm64 -p ha-911812 status --alsologtostderr -v 5
ha_test.go:613: (dbg) Done: out/minikube-linux-arm64 -p ha-911812 status --alsologtostderr -v 5: (1.161819862s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (81.28s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.08s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.078567255s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.08s)

                                                
                                    
TestJSONOutput/start/Command (49.96s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-452968 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=containerd
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-452968 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=containerd: (49.959146096s)
--- PASS: TestJSONOutput/start/Command (49.96s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.72s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-452968 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.72s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.64s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-452968 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.64s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (6.02s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-452968 --output=json --user=testUser
E1027 22:37:48.839066  271448 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-269600/.minikube/profiles/addons-437249/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-452968 --output=json --user=testUser: (6.024268657s)
--- PASS: TestJSONOutput/stop/Command (6.02s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.24s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-284854 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-284854 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (92.040988ms)

-- stdout --
	{"specversion":"1.0","id":"a99a2d4c-603e-43a8-82db-05b4d96f6270","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-284854] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"f0fcee51-e56d-4bfc-a3e3-39c4d5d2b6dd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21790"}}
	{"specversion":"1.0","id":"71d65a83-7c8c-4db6-851a-ad4fdb272057","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"b79df2a5-b2c0-4e43-82bc-40505e622c48","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21790-269600/kubeconfig"}}
	{"specversion":"1.0","id":"be3fb0f4-bec1-43f6-a200-720e81defbf6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21790-269600/.minikube"}}
	{"specversion":"1.0","id":"6c56cb76-c503-4716-9cd1-9ef452803955","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"e6fcfb99-2387-4161-a4bd-6b881b4d0304","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"70aec817-f5e3-482d-8a7c-81654dc46081","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-284854" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-284854
--- PASS: TestErrorJSONOutput (0.24s)
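Note: each --output=json line above is a CloudEvents-style envelope (specversion, id, source, type, data), so failures are machine-readable. A consumer could pull out error events with jq (a sketch; jq availability and the demo profile name are assumptions):

	out/minikube-linux-arm64 start -p json-demo --output=json --driver=fail \
	  | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.exitcode + " " + .data.name + ": " + .data.message'
	# 56 DRV_UNSUPPORTED_OS: The driver 'fail' is not supported on linux/arm64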

                                                
                                    
TestKicCustomNetwork/create_custom_network (41.9s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-755425 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-755425 --network=: (39.639397617s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-755425" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-755425
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-755425: (2.223440099s)
--- PASS: TestKicCustomNetwork/create_custom_network (41.90s)

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (38.14s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-600305 --network=bridge
E1027 22:39:11.904713  271448 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-269600/.minikube/profiles/addons-437249/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-600305 --network=bridge: (35.977864791s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-600305" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-600305
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-600305: (2.135616305s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (38.14s)

                                                
                                    
TestKicExistingNetwork (36.47s)

=== RUN   TestKicExistingNetwork
I1027 22:39:14.951967  271448 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1027 22:39:14.967787  271448 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1027 22:39:14.967869  271448 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1027 22:39:14.967887  271448 cli_runner.go:164] Run: docker network inspect existing-network
W1027 22:39:14.986374  271448 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1027 22:39:14.986410  271448 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

stderr:
Error response from daemon: network existing-network not found
I1027 22:39:14.986432  271448 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

** /stderr **
I1027 22:39:14.986537  271448 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1027 22:39:15.025374  271448 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-743a90b7240a IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:ea:c0:8c:48:2b:2c} reservation:<nil>}
I1027 22:39:15.025707  271448 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400030ac80}
I1027 22:39:15.025727  271448 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1027 22:39:15.025783  271448 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1027 22:39:15.104438  271448 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-465106 --network=existing-network
E1027 22:39:32.908053  271448 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-269600/.minikube/profiles/functional-735759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-465106 --network=existing-network: (34.146457631s)
helpers_test.go:175: Cleaning up "existing-network-465106" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-465106
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-465106: (2.138817741s)
I1027 22:39:51.408111  271448 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (36.47s)
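Note: the trace above shows the adoption path: the harness pre-creates a bridge network on the next free /24 (192.168.49.0/24 was already taken by an earlier profile), then minikube reuses it via --network instead of creating its own. The manual equivalent, with the subnet and labels from the log and an illustrative profile name:

	docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 \
	  --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network \
	  existing-network
	out/minikube-linux-arm64 start -p reuse-demo --network=existing-network
	docker network ls --format '{{.Name}}'   # existing-network appears once, reused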

                                                
                                    
TestKicCustomSubnet (37.38s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-858132 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-858132 --subnet=192.168.60.0/24: (35.169571361s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-858132 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-858132" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-858132
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-858132: (2.186332189s)
--- PASS: TestKicCustomSubnet (37.38s)
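Note: --subnet is verified by reading the first IPAM config block back out of docker, exactly as the test does (profile name illustrative; format string from the log):

	out/minikube-linux-arm64 start -p subnet-demo --subnet=192.168.60.0/24
	docker network inspect subnet-demo --format '{{(index .IPAM.Config 0).Subnet}}'
	# expect: 192.168.60.0/24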

                                                
                                    
TestKicStaticIP (38.78s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-688874 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-688874 --static-ip=192.168.200.200: (36.366950409s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-688874 ip
helpers_test.go:175: Cleaning up "static-ip-688874" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-688874
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-688874: (2.251105661s)
--- PASS: TestKicStaticIP (38.78s)
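Note: --static-ip pins the KIC node container's address instead of taking the network's allocation, and `minikube ip` is the round-trip check (profile name illustrative; flags from the log):

	out/minikube-linux-arm64 start -p ip-demo --static-ip=192.168.200.200
	out/minikube-linux-arm64 -p ip-demo ip   # expect: 192.168.200.200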

                                                
                                    
TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
TestMinikubeProfile (73.36s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-911974 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-911974 --driver=docker  --container-runtime=containerd: (32.5847232s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-915240 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-915240 --driver=docker  --container-runtime=containerd: (35.019585177s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-911974
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-915240
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-915240" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-915240
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-915240: (2.226319487s)
helpers_test.go:175: Cleaning up "first-911974" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-911974
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-911974: (2.038161905s)
--- PASS: TestMinikubeProfile (73.36s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (8.66s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-866388 --memory=3072 --mount-string /tmp/TestMountStartserial408541786/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-866388 --memory=3072 --mount-string /tmp/TestMountStartserial408541786/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (7.657484195s)
--- PASS: TestMountStart/serial/StartWithMountFirst (8.66s)
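Note: the flags above 9p-mount a host directory into the node at /minikube-host, with --mount-uid/--mount-gid setting ownership and --mount-msize/--mount-port tuning the 9p transport; --no-kubernetes keeps the test to the mount machinery alone. A trimmed manual run mirroring the logged command (host path and profile name hypothetical):

	out/minikube-linux-arm64 start -p mount-demo --memory=3072 \
	  --mount-string /tmp/host-dir:/minikube-host \
	  --mount-uid 0 --mount-gid 0 --mount-msize 6543 --mount-port 46464 \
	  --no-kubernetes --driver=docker --container-runtime=containerd
	out/minikube-linux-arm64 -p mount-demo ssh -- ls /minikube-host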

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.27s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-866388 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.27s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (10.29s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-868505 --memory=3072 --mount-string /tmp/TestMountStartserial408541786/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-868505 --memory=3072 --mount-string /tmp/TestMountStartserial408541786/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (9.291496288s)
--- PASS: TestMountStart/serial/StartWithMountSecond (10.29s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.28s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-868505 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.28s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.74s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-866388 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-866388 --alsologtostderr -v=5: (1.742470679s)
--- PASS: TestMountStart/serial/DeleteFirst (1.74s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.28s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-868505 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.28s)

                                                
                                    
TestMountStart/serial/Stop (1.31s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-868505
mount_start_test.go:196: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-868505: (1.310260792s)
--- PASS: TestMountStart/serial/Stop (1.31s)

                                                
                                    
TestMountStart/serial/RestartStopped (7.47s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-868505
E1027 22:42:48.838543  271448 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-269600/.minikube/profiles/addons-437249/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
mount_start_test.go:207: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-868505: (6.465826841s)
--- PASS: TestMountStart/serial/RestartStopped (7.47s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.28s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-868505 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.28s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (139.08s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-889386 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd
E1027 22:44:32.905870  271448 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-269600/.minikube/profiles/functional-735759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-889386 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd: (2m18.534860672s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-889386 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (139.08s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (4.92s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-889386 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-889386 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-889386 -- rollout status deployment/busybox: (3.064635309s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-889386 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-889386 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-889386 -- exec busybox-7b57f96db7-2cklw -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-889386 -- exec busybox-7b57f96db7-s5j4f -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-889386 -- exec busybox-7b57f96db7-2cklw -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-889386 -- exec busybox-7b57f96db7-s5j4f -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-889386 -- exec busybox-7b57f96db7-2cklw -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-889386 -- exec busybox-7b57f96db7-s5j4f -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.92s)
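Note: the rollout puts one busybox replica on each node, then DNS is asserted from inside every pod at three scopes: external name, cluster-short name, and cluster FQDN. For any pod name returned by the `get pods` call above (pod name is a placeholder):

	out/minikube-linux-arm64 kubectl -p multinode-889386 -- exec <busybox-pod> -- nslookup kubernetes.io
	out/minikube-linux-arm64 kubectl -p multinode-889386 -- exec <busybox-pod> -- nslookup kubernetes.default
	out/minikube-linux-arm64 kubectl -p multinode-889386 -- exec <busybox-pod> -- nslookup kubernetes.default.svc.cluster.local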

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (1.03s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-889386 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-889386 -- exec busybox-7b57f96db7-2cklw -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-889386 -- exec busybox-7b57f96db7-2cklw -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-889386 -- exec busybox-7b57f96db7-s5j4f -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-889386 -- exec busybox-7b57f96db7-s5j4f -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.03s)
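Note: host.minikube.internal resolves inside pods to the host side of the KIC network (192.168.67.1 in this run); the test scrapes the address out of busybox's nslookup output and pings it from each pod. The pipeline, verbatim from the log (NR==5 depends on busybox's nslookup output layout; pod name is a placeholder):

	out/minikube-linux-arm64 kubectl -p multinode-889386 -- exec <busybox-pod> -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
	out/minikube-linux-arm64 kubectl -p multinode-889386 -- exec <busybox-pod> -- sh -c "ping -c 1 192.168.67.1"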

                                                
                                    
TestMultiNode/serial/AddNode (28.33s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-889386 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-889386 -v=5 --alsologtostderr: (27.599148599s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-889386 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (28.33s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.08s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-889386 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.08s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.73s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.73s)

                                                
                                    
TestMultiNode/serial/CopyFile (10.79s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-889386 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-889386 cp testdata/cp-test.txt multinode-889386:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-889386 ssh -n multinode-889386 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-889386 cp multinode-889386:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile284255337/001/cp-test_multinode-889386.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-889386 ssh -n multinode-889386 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-889386 cp multinode-889386:/home/docker/cp-test.txt multinode-889386-m02:/home/docker/cp-test_multinode-889386_multinode-889386-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-889386 ssh -n multinode-889386 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-889386 ssh -n multinode-889386-m02 "sudo cat /home/docker/cp-test_multinode-889386_multinode-889386-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-889386 cp multinode-889386:/home/docker/cp-test.txt multinode-889386-m03:/home/docker/cp-test_multinode-889386_multinode-889386-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-889386 ssh -n multinode-889386 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-889386 ssh -n multinode-889386-m03 "sudo cat /home/docker/cp-test_multinode-889386_multinode-889386-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-889386 cp testdata/cp-test.txt multinode-889386-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-889386 ssh -n multinode-889386-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-889386 cp multinode-889386-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile284255337/001/cp-test_multinode-889386-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-889386 ssh -n multinode-889386-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-889386 cp multinode-889386-m02:/home/docker/cp-test.txt multinode-889386:/home/docker/cp-test_multinode-889386-m02_multinode-889386.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-889386 ssh -n multinode-889386-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-889386 ssh -n multinode-889386 "sudo cat /home/docker/cp-test_multinode-889386-m02_multinode-889386.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-889386 cp multinode-889386-m02:/home/docker/cp-test.txt multinode-889386-m03:/home/docker/cp-test_multinode-889386-m02_multinode-889386-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-889386 ssh -n multinode-889386-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-889386 ssh -n multinode-889386-m03 "sudo cat /home/docker/cp-test_multinode-889386-m02_multinode-889386-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-889386 cp testdata/cp-test.txt multinode-889386-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-889386 ssh -n multinode-889386-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-889386 cp multinode-889386-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile284255337/001/cp-test_multinode-889386-m03.txt
E1027 22:45:55.974125  271448 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-269600/.minikube/profiles/functional-735759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-889386 ssh -n multinode-889386-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-889386 cp multinode-889386-m03:/home/docker/cp-test.txt multinode-889386:/home/docker/cp-test_multinode-889386-m03_multinode-889386.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-889386 ssh -n multinode-889386-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-889386 ssh -n multinode-889386 "sudo cat /home/docker/cp-test_multinode-889386-m03_multinode-889386.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-889386 cp multinode-889386-m03:/home/docker/cp-test.txt multinode-889386-m02:/home/docker/cp-test_multinode-889386-m03_multinode-889386-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-889386 ssh -n multinode-889386-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-889386 ssh -n multinode-889386-m02 "sudo cat /home/docker/cp-test_multinode-889386-m03_multinode-889386-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.79s)
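
Note: the CopyFile sequence above exercises all three directions that `minikube cp` supports. Condensed, with placeholder profile and node names:

    # local file -> node
    minikube -p <profile> cp testdata/cp-test.txt <node1>:/home/docker/cp-test.txt
    # node -> local path
    minikube -p <profile> cp <node1>:/home/docker/cp-test.txt /tmp/cp-test.txt
    # node -> node, then verify over ssh
    minikube -p <profile> cp <node1>:/home/docker/cp-test.txt <node2>:/home/docker/cp-test.txt
    minikube -p <profile> ssh -n <node2> "sudo cat /home/docker/cp-test.txt"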

TestMultiNode/serial/StopNode (2.74s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-889386 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-889386 node stop m03: (1.307651918s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-889386 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-889386 status: exit status 7 (874.48085ms)
-- stdout --
	multinode-889386
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-889386-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-889386-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-889386 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-889386 status --alsologtostderr: exit status 7 (562.330366ms)
-- stdout --
	multinode-889386
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-889386-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-889386-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1027 22:46:00.663561  395792 out.go:360] Setting OutFile to fd 1 ...
	I1027 22:46:00.663690  395792 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 22:46:00.663701  395792 out.go:374] Setting ErrFile to fd 2...
	I1027 22:46:00.663706  395792 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 22:46:00.663968  395792 root.go:340] Updating PATH: /home/jenkins/minikube-integration/21790-269600/.minikube/bin
	I1027 22:46:00.664239  395792 out.go:368] Setting JSON to false
	I1027 22:46:00.664277  395792 mustload.go:66] Loading cluster: multinode-889386
	I1027 22:46:00.664371  395792 notify.go:221] Checking for updates...
	I1027 22:46:00.664679  395792 config.go:182] Loaded profile config "multinode-889386": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1027 22:46:00.664698  395792 status.go:174] checking status of multinode-889386 ...
	I1027 22:46:00.665624  395792 cli_runner.go:164] Run: docker container inspect multinode-889386 --format={{.State.Status}}
	I1027 22:46:00.687360  395792 status.go:371] multinode-889386 host status = "Running" (err=<nil>)
	I1027 22:46:00.687384  395792 host.go:66] Checking if "multinode-889386" exists ...
	I1027 22:46:00.687742  395792 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-889386
	I1027 22:46:00.721851  395792 host.go:66] Checking if "multinode-889386" exists ...
	I1027 22:46:00.722164  395792 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1027 22:46:00.722216  395792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-889386
	I1027 22:46:00.742311  395792 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33275 SSHKeyPath:/home/jenkins/minikube-integration/21790-269600/.minikube/machines/multinode-889386/id_rsa Username:docker}
	I1027 22:46:00.846456  395792 ssh_runner.go:195] Run: systemctl --version
	I1027 22:46:00.852837  395792 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1027 22:46:00.865546  395792 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 22:46:00.932060  395792 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-27 22:46:00.92126246 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1027 22:46:00.932620  395792 kubeconfig.go:125] found "multinode-889386" server: "https://192.168.67.2:8443"
	I1027 22:46:00.932661  395792 api_server.go:166] Checking apiserver status ...
	I1027 22:46:00.932708  395792 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:46:00.945622  395792 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1430/cgroup
	I1027 22:46:00.954259  395792 api_server.go:182] apiserver freezer: "12:freezer:/docker/5c1500979ac4e9004a05b1d8c52fb8c6209d38274d58e3b650b4d7656f9f3718/kubepods/burstable/pod98c45fb1c3bb05d91318ebaa35203aae/364979448cabec0131b0426429807dbe8313f2d12f15adcd4fefa4bfa530ca05"
	I1027 22:46:00.954343  395792 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/5c1500979ac4e9004a05b1d8c52fb8c6209d38274d58e3b650b4d7656f9f3718/kubepods/burstable/pod98c45fb1c3bb05d91318ebaa35203aae/364979448cabec0131b0426429807dbe8313f2d12f15adcd4fefa4bfa530ca05/freezer.state
	I1027 22:46:00.962309  395792 api_server.go:204] freezer state: "THAWED"
	I1027 22:46:00.962340  395792 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1027 22:46:00.970586  395792 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1027 22:46:00.970624  395792 status.go:463] multinode-889386 apiserver status = Running (err=<nil>)
	I1027 22:46:00.970637  395792 status.go:176] multinode-889386 status: &{Name:multinode-889386 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1027 22:46:00.970654  395792 status.go:174] checking status of multinode-889386-m02 ...
	I1027 22:46:00.970986  395792 cli_runner.go:164] Run: docker container inspect multinode-889386-m02 --format={{.State.Status}}
	I1027 22:46:00.988677  395792 status.go:371] multinode-889386-m02 host status = "Running" (err=<nil>)
	I1027 22:46:00.988702  395792 host.go:66] Checking if "multinode-889386-m02" exists ...
	I1027 22:46:00.989087  395792 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-889386-m02
	I1027 22:46:01.007335  395792 host.go:66] Checking if "multinode-889386-m02" exists ...
	I1027 22:46:01.007659  395792 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1027 22:46:01.007712  395792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-889386-m02
	I1027 22:46:01.026055  395792 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33280 SSHKeyPath:/home/jenkins/minikube-integration/21790-269600/.minikube/machines/multinode-889386-m02/id_rsa Username:docker}
	I1027 22:46:01.130031  395792 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1027 22:46:01.143066  395792 status.go:176] multinode-889386-m02 status: &{Name:multinode-889386-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1027 22:46:01.143100  395792 status.go:174] checking status of multinode-889386-m03 ...
	I1027 22:46:01.143418  395792 cli_runner.go:164] Run: docker container inspect multinode-889386-m03 --format={{.State.Status}}
	I1027 22:46:01.161784  395792 status.go:371] multinode-889386-m03 host status = "Stopped" (err=<nil>)
	I1027 22:46:01.161855  395792 status.go:384] host is not running, skipping remaining checks
	I1027 22:46:01.161862  395792 status.go:176] multinode-889386-m03 status: &{Name:multinode-889386-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.74s)
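
Note: the stderr trace above spells out the status probe chain for a control-plane node: container state via the driver, kubelet via systemd, then the apiserver located through its freezer cgroup and checked at /healthz. A rough manual replay under the docker driver; curl stands in for the HTTP check the binary performs internally:

    docker container inspect <profile> --format '{{.State.Status}}'          # host state
    minikube -p <profile> ssh "sudo systemctl is-active --quiet service kubelet"
    minikube -p <profile> ssh "sudo pgrep -xnf kube-apiserver.*minikube.*"   # apiserver pid
    curl -k https://192.168.67.2:8443/healthz                                # expects: ok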

TestMultiNode/serial/StartAfterStop (8.35s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-889386 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-889386 node start m03 -v=5 --alsologtostderr: (7.535563535s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-889386 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (8.35s)

TestMultiNode/serial/RestartKeepsNodes (80.57s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-889386
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-889386
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-889386: (25.157155279s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-889386 --wait=true -v=5 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-889386 --wait=true -v=5 --alsologtostderr: (55.222196269s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-889386
--- PASS: TestMultiNode/serial/RestartKeepsNodes (80.57s)

TestMultiNode/serial/DeleteNode (5.75s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-889386 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-889386 node delete m03: (5.052104481s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-889386 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.75s)

TestMultiNode/serial/StopMultiNode (24.1s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-889386 stop
E1027 22:47:48.838990  271448 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-269600/.minikube/profiles/addons-437249/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-889386 stop: (23.903712973s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-889386 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-889386 status: exit status 7 (93.092747ms)
-- stdout --
	multinode-889386
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-889386-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-889386 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-889386 status --alsologtostderr: exit status 7 (101.62599ms)
-- stdout --
	multinode-889386
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-889386-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1027 22:47:59.892986  404623 out.go:360] Setting OutFile to fd 1 ...
	I1027 22:47:59.893163  404623 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 22:47:59.893194  404623 out.go:374] Setting ErrFile to fd 2...
	I1027 22:47:59.893217  404623 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 22:47:59.893496  404623 root.go:340] Updating PATH: /home/jenkins/minikube-integration/21790-269600/.minikube/bin
	I1027 22:47:59.893703  404623 out.go:368] Setting JSON to false
	I1027 22:47:59.893785  404623 mustload.go:66] Loading cluster: multinode-889386
	I1027 22:47:59.893846  404623 notify.go:221] Checking for updates...
	I1027 22:47:59.894825  404623 config.go:182] Loaded profile config "multinode-889386": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1027 22:47:59.894871  404623 status.go:174] checking status of multinode-889386 ...
	I1027 22:47:59.895436  404623 cli_runner.go:164] Run: docker container inspect multinode-889386 --format={{.State.Status}}
	I1027 22:47:59.915205  404623 status.go:371] multinode-889386 host status = "Stopped" (err=<nil>)
	I1027 22:47:59.915226  404623 status.go:384] host is not running, skipping remaining checks
	I1027 22:47:59.915233  404623 status.go:176] multinode-889386 status: &{Name:multinode-889386 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1027 22:47:59.915268  404623 status.go:174] checking status of multinode-889386-m02 ...
	I1027 22:47:59.915558  404623 cli_runner.go:164] Run: docker container inspect multinode-889386-m02 --format={{.State.Status}}
	I1027 22:47:59.945779  404623 status.go:371] multinode-889386-m02 host status = "Stopped" (err=<nil>)
	I1027 22:47:59.945800  404623 status.go:384] host is not running, skipping remaining checks
	I1027 22:47:59.945807  404623 status.go:176] multinode-889386-m02 status: &{Name:multinode-889386-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.10s)

TestMultiNode/serial/RestartMultiNode (51.97s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-889386 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-889386 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd: (51.274871652s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-889386 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (51.97s)

TestMultiNode/serial/ValidateNameConflict (36.97s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-889386
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-889386-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-889386-m02 --driver=docker  --container-runtime=containerd: exit status 14 (104.70484ms)
-- stdout --
	* [multinode-889386-m02] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21790
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21790-269600/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21790-269600/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	! Profile name 'multinode-889386-m02' is duplicated with machine name 'multinode-889386-m02' in profile 'multinode-889386'
	X Exiting due to MK_USAGE: Profile name should be unique
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-889386-m03 --driver=docker  --container-runtime=containerd
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-889386-m03 --driver=docker  --container-runtime=containerd: (34.292759756s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-889386
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-889386: exit status 80 (388.444013ms)
-- stdout --
	* Adding node m03 to cluster multinode-889386 as [worker]
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-889386-m03 already exists in multinode-889386-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-889386-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-889386-m03: (2.130986922s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (36.97s)

TestPreload (123.59s)

=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-289848 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.0
preload_test.go:43: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-289848 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.0: (1m0.364643894s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-289848 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-arm64 -p test-preload-289848 image pull gcr.io/k8s-minikube/busybox: (2.259192385s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-289848
preload_test.go:57: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-289848: (5.91585474s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-289848 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd
preload_test.go:65: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-289848 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd: (52.326878626s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-289848 image list
helpers_test.go:175: Cleaning up "test-preload-289848" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-289848
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-289848: (2.456305966s)
--- PASS: TestPreload (123.59s)
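
Note: reading the steps above, TestPreload appears to assert that an image pulled into a --preload=false cluster is still present after a restart with preloads enabled. A sketch under that reading; the profile name is a placeholder:

    minikube start -p <profile> --memory=3072 --preload=false --kubernetes-version=v1.32.0 \
      --driver=docker --container-runtime=containerd
    minikube -p <profile> image pull gcr.io/k8s-minikube/busybox
    minikube stop -p <profile>
    minikube start -p <profile> --memory=3072 --driver=docker --container-runtime=containerd
    minikube -p <profile> image list   # busybox should still be listed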

TestInsufficientStorage (13.18s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-846693 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=containerd
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-846693 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (10.532769746s)
-- stdout --
	{"specversion":"1.0","id":"f710235b-fe4d-473c-9e90-0e1d0f766253","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-846693] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"7ce83c73-a7f6-4980-8f8d-8e77e4a65820","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21790"}}
	{"specversion":"1.0","id":"c232e4ef-b5ba-4eed-a7d7-c8422c2043bc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"303bb601-8d72-412f-8018-13f8713a132a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21790-269600/kubeconfig"}}
	{"specversion":"1.0","id":"280613a5-2e99-490e-891e-b56587a3d359","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21790-269600/.minikube"}}
	{"specversion":"1.0","id":"713cdbc8-4034-4f80-872a-9d0373bf17c4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"d46b6391-746c-4c78-9ddd-281fba2cdda0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"243893bc-a9c2-4df4-88ee-315346b0ebf6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"be8d0b49-4957-4a2d-a4dd-2de031dc793d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"649519e3-b1aa-41bf-a3b5-dbdb2dda127f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"9e9f7214-9026-40cf-9488-95135834305c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"74fc8605-38dc-4f1c-bd2c-70d42618d3a4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-846693\" primary control-plane node in \"insufficient-storage-846693\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"261d5523-0c81-47db-83d5-7d9f29684f1e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1760939008-21773 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"5270cc84-6e8d-4ee2-8e1b-451690d50af0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"6bd97dbb-f33e-40f4-a958-0c1872db10e2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-846693 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-846693 --output=json --layout=cluster: exit status 7 (318.014445ms)
-- stdout --
	{"Name":"insufficient-storage-846693","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-846693","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
** stderr ** 
	E1027 22:52:23.963763  423049 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-846693" does not appear in /home/jenkins/minikube-integration/21790-269600/kubeconfig
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-846693 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-846693 --output=json --layout=cluster: exit status 7 (313.92358ms)
-- stdout --
	{"Name":"insufficient-storage-846693","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-846693","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
** stderr ** 
	E1027 22:52:24.278108  423112 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-846693" does not appear in /home/jenkins/minikube-integration/21790-269600/kubeconfig
	E1027 22:52:24.288905  423112 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/21790-269600/.minikube/profiles/insufficient-storage-846693/events.json: no such file or directory
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-846693" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-846693
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-846693: (2.009834053s)
--- PASS: TestInsufficientStorage (13.18s)
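
Note: the storage failure is simulated through the two MINIKUBE_TEST_* variables echoed in the JSON events above; with them set, start exits 26 (RSRC_DOCKER_STORAGE) unless --force is passed, and status then reports code 507. A sketch with a placeholder profile:

    MINIKUBE_TEST_STORAGE_CAPACITY=100 MINIKUBE_TEST_AVAILABLE_STORAGE=19 \
      minikube start -p <profile> --memory=3072 --output=json \
      --driver=docker --container-runtime=containerd          # exit status 26
    minikube status -p <profile> --output=json --layout=cluster   # StatusCode 507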

TestRunningBinaryUpgrade (60.07s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.2686321988 start -p running-upgrade-817807 --memory=3072 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.2686321988 start -p running-upgrade-817807 --memory=3072 --vm-driver=docker  --container-runtime=containerd: (30.736674047s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-817807 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-817807 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (26.171611783s)
helpers_test.go:175: Cleaning up "running-upgrade-817807" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-817807
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-817807: (2.120550179s)
--- PASS: TestRunningBinaryUpgrade (60.07s)

TestKubernetesUpgrade (350.99s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-923956 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-923956 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (37.179022188s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-923956
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-923956: (1.410904529s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-923956 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-923956 status --format={{.Host}}: exit status 7 (79.565081ms)
-- stdout --
	Stopped
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-923956 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E1027 22:54:32.905261  271448 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-269600/.minikube/profiles/functional-735759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-923956 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (4m51.184701112s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-923956 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-923956 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-923956 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd: exit status 106 (122.70509ms)
-- stdout --
	* [kubernetes-upgrade-923956] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21790
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21790-269600/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21790-269600/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-923956
	    minikube start -p kubernetes-upgrade-923956 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-9239562 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.1, by running:
	    
	    minikube start -p kubernetes-upgrade-923956 --kubernetes-version=v1.34.1
	    
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-923956 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E1027 22:59:32.905815  271448 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-269600/.minikube/profiles/functional-735759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-923956 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (17.894939192s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-923956" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-923956
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-923956: (2.994069891s)
--- PASS: TestKubernetesUpgrade (350.99s)
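
Note: the upgrade path above is start on the old version, stop, start on the new one; the in-place downgrade is refused with exit 106, and the CLI itself suggests recreating. Condensed, with a placeholder profile:

    minikube start -p <profile> --kubernetes-version=v1.28.0 --driver=docker --container-runtime=containerd
    minikube stop -p <profile>
    minikube start -p <profile> --kubernetes-version=v1.34.1 --driver=docker --container-runtime=containerd
    # in-place downgrade fails (K8S_DOWNGRADE_UNSUPPORTED); recreate instead:
    minikube delete -p <profile>
    minikube start -p <profile> --kubernetes-version=v1.28.0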

TestMissingContainerUpgrade (177.23s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.32.0.4246778098 start -p missing-upgrade-047626 --memory=3072 --driver=docker  --container-runtime=containerd
E1027 22:52:48.839110  271448 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-269600/.minikube/profiles/addons-437249/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.32.0.4246778098 start -p missing-upgrade-047626 --memory=3072 --driver=docker  --container-runtime=containerd: (59.899805374s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-047626
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-047626
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-047626 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-047626 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m53.315299801s)
helpers_test.go:175: Cleaning up "missing-upgrade-047626" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-047626
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-047626: (1.956304561s)
--- PASS: TestMissingContainerUpgrade (177.23s)
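
Note: this scenario deletes the node container behind minikube's back and relies on a plain start to recreate it. The equivalent manual steps, assuming the docker driver and a placeholder profile:

    docker stop <profile> && docker rm <profile>
    minikube start -p <profile> --driver=docker --container-runtime=containerd   # recreates the node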

TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:85: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-230069 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:85: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-230069 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd: exit status 14 (97.726614ms)
-- stdout --
	* [NoKubernetes-230069] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21790
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21790-269600/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21790-269600/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)
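
Note: --no-kubernetes and --kubernetes-version are mutually exclusive (exit 14, MK_USAGE). Following the hint in the stderr block, a globally configured version is cleared first:

    minikube config unset kubernetes-version
    minikube start -p <profile> --no-kubernetes --driver=docker --container-runtime=containerd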

TestNoKubernetes/serial/StartWithK8s (48.29s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:97: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-230069 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:97: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-230069 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (47.741236572s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-230069 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (48.29s)

TestNoKubernetes/serial/StartWithStopK8s (18.29s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:114: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-230069 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:114: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-230069 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (15.928297501s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-230069 status -o json
no_kubernetes_test.go:202: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-230069 status -o json: exit status 2 (323.268573ms)
-- stdout --
	{"Name":"NoKubernetes-230069","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}
-- /stdout --
no_kubernetes_test.go:126: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-230069
no_kubernetes_test.go:126: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-230069: (2.041443177s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (18.29s)

TestNoKubernetes/serial/Start (7.88s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:138: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-230069 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:138: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-230069 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (7.882939506s)
--- PASS: TestNoKubernetes/serial/Start (7.88s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.28s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-230069 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-230069 "sudo systemctl is-active --quiet service kubelet": exit status 1 (278.405706ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.28s)
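
Note: the "Process exited with status 3" in stderr is systemd's is-active exit code for an inactive unit, which is the expected state here since Kubernetes is disabled. To inspect by hand (placeholder profile):

    minikube ssh -p <profile> "sudo systemctl is-active service kubelet"   # prints "inactive", remote exit 3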

TestNoKubernetes/serial/ProfileList (0.7s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:171: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:181: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.70s)

TestNoKubernetes/serial/Stop (1.3s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:160: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-230069
no_kubernetes_test.go:160: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-230069: (1.300665559s)
--- PASS: TestNoKubernetes/serial/Stop (1.30s)

TestNoKubernetes/serial/StartNoArgs (7.24s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:193: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-230069 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:193: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-230069 --driver=docker  --container-runtime=containerd: (7.240183915s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.24s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.36s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-230069 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-230069 "sudo systemctl is-active --quiet service kubelet": exit status 1 (357.457124ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.36s)

TestStoppedBinaryUpgrade/Setup (1.55s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.55s)

TestStoppedBinaryUpgrade/Upgrade (65.82s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.1805460844 start -p stopped-upgrade-763387 --memory=3072 --vm-driver=docker  --container-runtime=containerd
E1027 22:55:51.906053  271448 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-269600/.minikube/profiles/addons-437249/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.1805460844 start -p stopped-upgrade-763387 --memory=3072 --vm-driver=docker  --container-runtime=containerd: (30.750878848s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.1805460844 -p stopped-upgrade-763387 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.1805460844 -p stopped-upgrade-763387 stop: (1.251221443s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-763387 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-763387 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (33.821129931s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (65.82s)
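
Note: the stopped-binary upgrade provisions and stops the cluster with an old released binary, then restarts it with the freshly built one. Skeleton of the flow; <suffix> stands for whatever temp name the harness used:

    /tmp/minikube-v1.32.0.<suffix> start -p <profile> --memory=3072 --vm-driver=docker --container-runtime=containerd
    /tmp/minikube-v1.32.0.<suffix> -p <profile> stop
    out/minikube-linux-arm64 start -p <profile> --memory=3072 --driver=docker --container-runtime=containerd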

TestStoppedBinaryUpgrade/MinikubeLogs (1.5s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-763387
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-763387: (1.500384892s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.50s)

TestPause/serial/Start (80.61s)
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-567838 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd
E1027 22:57:48.839156  271448 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-269600/.minikube/profiles/addons-437249/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-567838 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (1m20.61290738s)
--- PASS: TestPause/serial/Start (80.61s)

TestPause/serial/SecondStartNoReconfiguration (7.27s)
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-567838 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-567838 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (7.242907532s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (7.27s)

TestPause/serial/Pause (0.85s)
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-567838 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.85s)

TestPause/serial/VerifyStatus (0.34s)
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-567838 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-567838 --output=json --layout=cluster: exit status 2 (343.739472ms)

-- stdout --
	{"Name":"pause-567838","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, istio-operator","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-567838","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.34s)
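Exit status 2 is the expected outcome here: minikube status encodes component state in its exit code, and the JSON above reports the paused apiserver as StatusCode 418 ("Paused") with kubelet at 405 ("Stopped"). A sketch for inspecting the same fields, assuming jq is installed (the status command itself still exits non-zero while paused):

    out/minikube-linux-arm64 status -p pause-567838 --output=json --layout=cluster | jq '.Nodes[].Components'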
TestPause/serial/Unpause (0.8s)
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-567838 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.80s)

TestPause/serial/PauseAgain (1.16s)
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-567838 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-567838 --alsologtostderr -v=5: (1.155253177s)
--- PASS: TestPause/serial/PauseAgain (1.16s)

TestPause/serial/DeletePaused (3.09s)
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-567838 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-567838 --alsologtostderr -v=5: (3.086128434s)
--- PASS: TestPause/serial/DeletePaused (3.09s)

TestPause/serial/VerifyDeletedResources (0.46s)
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-567838
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-567838: exit status 1 (20.656739ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: get pause-567838: no such volume

** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.46s)
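Cleanup is verified negatively: after delete, the container, volume, and network must all be gone, so the non-zero exit from docker volume inspect is the passing case. The same checks by hand, right after deleting the profile:

    docker ps -a --filter name=pause-567838   # should list no container
    docker volume inspect pause-567838        # should fail: no such volume
    docker network ls                         # should show no pause-567838 network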
TestNetworkPlugins/group/false (5.73s)
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-521323 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-521323 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd: exit status 14 (263.14531ms)

-- stdout --
	* [false-521323] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21790
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21790-269600/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21790-269600/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I1027 22:59:50.514718  465788 out.go:360] Setting OutFile to fd 1 ...
	I1027 22:59:50.515313  465788 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 22:59:50.515350  465788 out.go:374] Setting ErrFile to fd 2...
	I1027 22:59:50.515370  465788 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 22:59:50.515683  465788 root.go:340] Updating PATH: /home/jenkins/minikube-integration/21790-269600/.minikube/bin
	I1027 22:59:50.516184  465788 out.go:368] Setting JSON to false
	I1027 22:59:50.520554  465788 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":9741,"bootTime":1761596250,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1027 22:59:50.520673  465788 start.go:143] virtualization:  
	I1027 22:59:50.524487  465788 out.go:179] * [false-521323] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1027 22:59:50.527825  465788 out.go:179]   - MINIKUBE_LOCATION=21790
	I1027 22:59:50.527904  465788 notify.go:221] Checking for updates...
	I1027 22:59:50.534352  465788 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1027 22:59:50.537317  465788 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21790-269600/kubeconfig
	I1027 22:59:50.540234  465788 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21790-269600/.minikube
	I1027 22:59:50.543752  465788 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1027 22:59:50.546805  465788 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1027 22:59:50.550323  465788 config.go:182] Loaded profile config "force-systemd-flag-246216": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1027 22:59:50.550511  465788 driver.go:422] Setting default libvirt URI to qemu:///system
	I1027 22:59:50.586711  465788 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1027 22:59:50.586825  465788 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 22:59:50.675064  465788 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-27 22:59:50.664608829 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1027 22:59:50.675166  465788 docker.go:318] overlay module found
	I1027 22:59:50.678481  465788 out.go:179] * Using the docker driver based on user configuration
	I1027 22:59:50.681783  465788 start.go:307] selected driver: docker
	I1027 22:59:50.681807  465788 start.go:928] validating driver "docker" against <nil>
	I1027 22:59:50.681829  465788 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1027 22:59:50.685318  465788 out.go:203] 
	W1027 22:59:50.688279  465788 out.go:285] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I1027 22:59:50.691153  465788 out.go:203] 

** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-521323 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-521323

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-521323

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-521323

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-521323

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-521323

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-521323

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-521323

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-521323

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-521323

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-521323

>>> host: /etc/nsswitch.conf:
* Profile "false-521323" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-521323"

>>> host: /etc/hosts:
* Profile "false-521323" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-521323"

>>> host: /etc/resolv.conf:
* Profile "false-521323" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-521323"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-521323

>>> host: crictl pods:
* Profile "false-521323" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-521323"

>>> host: crictl containers:
* Profile "false-521323" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-521323"

>>> k8s: describe netcat deployment:
error: context "false-521323" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-521323" does not exist

>>> k8s: netcat logs:
error: context "false-521323" does not exist

>>> k8s: describe coredns deployment:
error: context "false-521323" does not exist

>>> k8s: describe coredns pods:
error: context "false-521323" does not exist

>>> k8s: coredns logs:
error: context "false-521323" does not exist

>>> k8s: describe api server pod(s):
error: context "false-521323" does not exist

>>> k8s: api server logs:
error: context "false-521323" does not exist

>>> host: /etc/cni:
* Profile "false-521323" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-521323"

>>> host: ip a s:
* Profile "false-521323" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-521323"

>>> host: ip r s:
* Profile "false-521323" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-521323"

>>> host: iptables-save:
* Profile "false-521323" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-521323"

>>> host: iptables table nat:
* Profile "false-521323" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-521323"

>>> k8s: describe kube-proxy daemon set:
error: context "false-521323" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-521323" does not exist

>>> k8s: kube-proxy logs:
error: context "false-521323" does not exist

>>> host: kubelet daemon status:
* Profile "false-521323" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-521323"

>>> host: kubelet daemon config:
* Profile "false-521323" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-521323"

>>> k8s: kubelet logs:
* Profile "false-521323" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-521323"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-521323" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-521323"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-521323" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-521323"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21790-269600/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 27 Oct 2025 22:59:52 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: force-systemd-flag-246216
contexts:
- context:
    cluster: force-systemd-flag-246216
    extensions:
    - extension:
        last-update: Mon, 27 Oct 2025 22:59:52 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: force-systemd-flag-246216
  name: force-systemd-flag-246216
current-context: force-systemd-flag-246216
kind: Config
preferences: {}
users:
- name: force-systemd-flag-246216
  user:
    client-certificate: /home/jenkins/minikube-integration/21790-269600/.minikube/profiles/force-systemd-flag-246216/client.crt
    client-key: /home/jenkins/minikube-integration/21790-269600/.minikube/profiles/force-systemd-flag-246216/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-521323

>>> host: docker daemon status:
* Profile "false-521323" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-521323"

>>> host: docker daemon config:
* Profile "false-521323" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-521323"

>>> host: /etc/docker/daemon.json:
* Profile "false-521323" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-521323"

>>> host: docker system info:
* Profile "false-521323" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-521323"

>>> host: cri-docker daemon status:
* Profile "false-521323" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-521323"

>>> host: cri-docker daemon config:
* Profile "false-521323" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-521323"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-521323" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-521323"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-521323" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-521323"

>>> host: cri-dockerd version:
* Profile "false-521323" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-521323"

>>> host: containerd daemon status:
* Profile "false-521323" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-521323"

>>> host: containerd daemon config:
* Profile "false-521323" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-521323"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-521323" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-521323"

>>> host: /etc/containerd/config.toml:
* Profile "false-521323" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-521323"

>>> host: containerd config dump:
* Profile "false-521323" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-521323"

>>> host: crio daemon status:
* Profile "false-521323" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-521323"

>>> host: crio daemon config:
* Profile "false-521323" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-521323"

>>> host: /etc/crio:
* Profile "false-521323" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-521323"

>>> host: crio config:
* Profile "false-521323" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-521323"

----------------------- debugLogs end: false-521323 [took: 5.196130057s] --------------------------------
helpers_test.go:175: Cleaning up "false-521323" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-521323
--- PASS: TestNetworkPlugins/group/false (5.73s)
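This group passes by failing fast: minikube validates that the containerd runtime needs a CNI plugin, so --cni=false is rejected as a usage error (MK_USAGE, exit status 14) before any cluster is created, which is why the debug log above finds no profile or context. The validation can be reproduced directly; a sketch with a hypothetical profile name:

    out/minikube-linux-arm64 start -p cni-false-demo --cni=false --driver=docker --container-runtime=containerd
    echo $?   # 14; stderr explains that the "containerd" runtime requires CNI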
TestStartStop/group/old-k8s-version/serial/FirstStart (60.56s)
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-546957 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-546957 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0: (1m0.56255698s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (60.56s)

TestStartStop/group/old-k8s-version/serial/DeployApp (9.4s)
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-546957 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [c97adc77-a387-44e6-9c24-335350d7b34b] Pending
helpers_test.go:352: "busybox" [c97adc77-a387-44e6-9c24-335350d7b34b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [c97adc77-a387-44e6-9c24-335350d7b34b] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.00298437s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-546957 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.40s)
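The deploy step creates a busybox pod from testdata and watches it move from Pending to Running before exec'ing into it. Roughly the same flow can be expressed with kubectl wait in place of the test's polling helper:

    kubectl --context old-k8s-version-546957 create -f testdata/busybox.yaml
    kubectl --context old-k8s-version-546957 wait --for=condition=Ready pod -l integration-test=busybox --timeout=8m
    kubectl --context old-k8s-version-546957 exec busybox -- /bin/sh -c "ulimit -n"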
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.2s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-546957 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-546957 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.057209593s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-546957 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.20s)
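The addon is enabled with image and registry overrides so the test never pulls the real metrics-server image: --images remaps the MetricsServer image name and --registries points it at a placeholder registry, which the describe output can then be checked against. By hand:

    out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-546957 \
      --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
    kubectl --context old-k8s-version-546957 describe deploy/metrics-server -n kube-system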
TestStartStop/group/old-k8s-version/serial/Stop (12.07s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-546957 --alsologtostderr -v=3
E1027 23:02:35.976143  271448 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-269600/.minikube/profiles/functional-735759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-546957 --alsologtostderr -v=3: (12.066307572s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.07s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-546957 -n old-k8s-version-546957
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-546957 -n old-k8s-version-546957: exit status 7 (82.109553ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-546957 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)
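Exit status 7 from minikube status denotes a stopped host and is tolerated ("may be ok"); the point of the test is that enabling the dashboard addon succeeds against a stopped profile and takes effect on the next start. Condensed:

    out/minikube-linux-arm64 status --format='{{.Host}}' -p old-k8s-version-546957   # prints Stopped, exits 7
    out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-546957 --images=MetricsScraper=registry.k8s.io/echoserver:1.4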
TestStartStop/group/old-k8s-version/serial/SecondStart (28.79s)
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-546957 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0
E1027 23:02:48.839195  271448 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-269600/.minikube/profiles/addons-437249/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-546957 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0: (28.316197138s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-546957 -n old-k8s-version-546957
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (28.79s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (8s)
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-zft44" [8b7b7c7a-faa8-4c1a-b3eb-a99a5649282c] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-zft44" [8b7b7c7a-faa8-4c1a-b3eb-a99a5649282c] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 8.003595387s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (8.00s)
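After the restart, the user-app check is just a label-selector wait in the kubernetes-dashboard namespace. An equivalent using kubectl wait instead of the test's polling helper:

    kubectl --context old-k8s-version-546957 wait --for=condition=Ready pod \
      -l k8s-app=kubernetes-dashboard -n kubernetes-dashboard --timeout=9m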
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.11s)
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-zft44" [8b7b7c7a-faa8-4c1a-b3eb-a99a5649282c] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.008884743s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-546957 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.11s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.29s)
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-546957 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.29s)

TestStartStop/group/old-k8s-version/serial/Pause (3.09s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-546957 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-546957 -n old-k8s-version-546957
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-546957 -n old-k8s-version-546957: exit status 2 (358.487612ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-546957 -n old-k8s-version-546957
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-546957 -n old-k8s-version-546957: exit status 2 (346.777334ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-546957 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-546957 -n old-k8s-version-546957
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-546957 -n old-k8s-version-546957
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.09s)
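Pause is verified through exit codes rather than output parsing: while paused, the {{.APIServer}} and {{.Kubelet}} status queries exit 2 (Paused/Stopped, "may be ok"), and after unpause they exit 0 again. Condensed:

    out/minikube-linux-arm64 pause -p old-k8s-version-546957
    out/minikube-linux-arm64 status --format='{{.APIServer}}' -p old-k8s-version-546957   # Paused, exit 2
    out/minikube-linux-arm64 unpause -p old-k8s-version-546957
    out/minikube-linux-arm64 status --format='{{.APIServer}}' -p old-k8s-version-546957   # Running, exit 0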
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (87.63s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-833544 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-833544 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (1m27.634175366s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (87.63s)
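The default-k8s-diff-port variant differs from a stock start only in --apiserver-port=8444, exercising non-default API server port handling end to end. A sketch with a hypothetical profile name:

    out/minikube-linux-arm64 start -p port-demo --apiserver-port=8444 --driver=docker --container-runtime=containerd
    kubectl --context port-demo cluster-info   # the server URL should end in :8444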
TestStartStop/group/embed-certs/serial/FirstStart (56.69s)
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-752077 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
E1027 23:04:32.905137  271448 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-269600/.minikube/profiles/functional-735759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-752077 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (56.692687158s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (56.69s)
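--embed-certs inlines client certificate data into kubeconfig instead of referencing .crt/.key files on disk. One way to spot the difference, assuming a hypothetical profile named embed-demo and kubectl's JSONPath filter support:

    out/minikube-linux-arm64 start -p embed-demo --embed-certs --driver=docker --container-runtime=containerd
    kubectl config view --raw -o jsonpath='{.users[?(@.name=="embed-demo")].user.client-certificate-data}' | head -c 40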
TestStartStop/group/embed-certs/serial/DeployApp (8.37s)
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-752077 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [150b0ef8-d7d4-4ece-8b73-89d9c394fa77] Pending
helpers_test.go:352: "busybox" [150b0ef8-d7d4-4ece-8b73-89d9c394fa77] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [150b0ef8-d7d4-4ece-8b73-89d9c394fa77] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.004192237s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-752077 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.37s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.21s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-752077 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-752077 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.111044927s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-752077 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.21s)

TestStartStop/group/embed-certs/serial/Stop (12.52s)
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-752077 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-752077 --alsologtostderr -v=3: (12.514870805s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.52s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.48s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-833544 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [a29055bd-33a3-41ff-b1b0-d20c3d53e243] Pending
helpers_test.go:352: "busybox" [a29055bd-33a3-41ff-b1b0-d20c3d53e243] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [a29055bd-33a3-41ff-b1b0-d20c3d53e243] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.003140068s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-833544 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.48s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.66s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-833544 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-833544 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.484380187s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-833544 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.66s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.32s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-752077 -n embed-certs-752077
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-752077 -n embed-certs-752077: exit status 7 (138.173707ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-752077 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.32s)

TestStartStop/group/embed-certs/serial/SecondStart (50.65s)
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-752077 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-752077 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (50.232199497s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-752077 -n embed-certs-752077
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (50.65s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (12.39s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-833544 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-833544 --alsologtostderr -v=3: (12.385932227s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.39s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.36s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-833544 -n default-k8s-diff-port-833544
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-833544 -n default-k8s-diff-port-833544: exit status 7 (159.520984ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-833544 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.36s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (55.81s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-833544 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-833544 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (55.215872849s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-833544 -n default-k8s-diff-port-833544
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (55.81s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-2c7mz" [454b7a95-ce75-4fc5-83d1-ebd4f83af231] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.002961136s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)
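
The "waiting 9m0s for pods matching ..." helper is, in essence, a poll of the pod list filtered by a label selector until one pod reports Running. A sketch of that loop with client-go (clientset construction omitted; the function name is ours, not the test's):

	// Package probe sketches the label-selector wait used above.
	package probe

	import (
		"context"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)

	// WaitForLabelledPod polls every 2s, for up to 9m (the timeout logged
	// above), until a pod matching selector in ns reports Running.
	func WaitForLabelledPod(ctx context.Context, cs kubernetes.Interface, ns, selector string) error {
		return wait.PollUntilContextTimeout(ctx, 2*time.Second, 9*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
				if err != nil {
					return false, err // stop polling on API errors
				}
				for _, p := range pods.Items {
					if p.Status.Phase == corev1.PodRunning {
						return true, nil
					}
				}
				return false, nil // not ready yet; keep polling
			})
	}

Here the call would be WaitForLabelledPod(ctx, cs, "kubernetes-dashboard", "k8s-app=kubernetes-dashboard").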

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.09s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-2c7mz" [454b7a95-ce75-4fc5-83d1-ebd4f83af231] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003445744s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-752077 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.09s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-752077 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)
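
The image check shells out to `image list --format=json` and scans the result for images that do not belong to a stock cluster. The JSON schema itself is not shown in this log, so a consumer can stay schema-agnostic, as in this sketch (assumes only that the output is a JSON array of objects):

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	func main() {
		raw, err := exec.Command("out/minikube-linux-arm64", "-p", "embed-certs-752077",
			"image", "list", "--format=json").Output()
		if err != nil {
			panic(err)
		}
		var images []map[string]any // generic: no field names assumed
		if err := json.Unmarshal(raw, &images); err != nil {
			panic(err)
		}
		fmt.Printf("%d images reported\n", len(images))
	}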

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (3.22s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-752077 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-752077 -n embed-certs-752077
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-752077 -n embed-certs-752077: exit status 2 (363.012745ms)

                                                
                                                
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-752077 -n embed-certs-752077
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-752077 -n embed-certs-752077: exit status 2 (370.537339ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-752077 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-752077 -n embed-certs-752077
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-752077 -n embed-certs-752077
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.22s)
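
Exit codes are load-bearing in these checks: in this run, status exits 7 for a stopped host and 2 while components are paused, so the harness captures the code instead of failing outright ("may be ok"). A sketch of that capture with os/exec (binary and flags as logged above; error handling illustrative):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-arm64", "status",
			"--format={{.APIServer}}", "-p", "embed-certs-752077")
		out, err := cmd.Output()
		code := 0
		if ee, ok := err.(*exec.ExitError); ok {
			code = ee.ExitCode() // e.g. 2 when paused, 7 when stopped
		} else if err != nil {
			panic(err) // the binary could not be run at all
		}
		fmt.Printf("stdout=%q exit=%d (may be ok)\n", out, code)
	}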

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-9s76j" [a920d62e-04e2-4950-831f-4d36d9e040ad] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004844299s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (70.23s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-386286 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-386286 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (1m10.227586728s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (70.23s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.12s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-9s76j" [a920d62e-04e2-4950-831f-4d36d9e040ad] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003269138s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-833544 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.12s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.32s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-833544 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.32s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (3.82s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-833544 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-833544 -n default-k8s-diff-port-833544
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-833544 -n default-k8s-diff-port-833544: exit status 2 (412.631199ms)

                                                
                                                
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-833544 -n default-k8s-diff-port-833544
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-833544 -n default-k8s-diff-port-833544: exit status 2 (407.284874ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-833544 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-833544 -n default-k8s-diff-port-833544
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-833544 -n default-k8s-diff-port-833544
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.82s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (45.45s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-318403 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
E1027 23:07:24.605148  271448 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-269600/.minikube/profiles/old-k8s-version-546957/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 23:07:24.611701  271448 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-269600/.minikube/profiles/old-k8s-version-546957/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 23:07:24.623152  271448 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-269600/.minikube/profiles/old-k8s-version-546957/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 23:07:24.644589  271448 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-269600/.minikube/profiles/old-k8s-version-546957/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 23:07:24.686166  271448 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-269600/.minikube/profiles/old-k8s-version-546957/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 23:07:24.767605  271448 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-269600/.minikube/profiles/old-k8s-version-546957/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 23:07:24.928903  271448 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-269600/.minikube/profiles/old-k8s-version-546957/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 23:07:25.250988  271448 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-269600/.minikube/profiles/old-k8s-version-546957/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 23:07:25.892655  271448 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-269600/.minikube/profiles/old-k8s-version-546957/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 23:07:27.174083  271448 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-269600/.minikube/profiles/old-k8s-version-546957/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 23:07:29.736261  271448 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-269600/.minikube/profiles/old-k8s-version-546957/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-318403 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (45.443211717s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (45.45s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.01s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-318403 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-318403 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.010902207s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.01s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (1.41s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-318403 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-318403 --alsologtostderr -v=3: (1.407314617s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.41s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-318403 -n newest-cni-318403
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-318403 -n newest-cni-318403: exit status 7 (68.196254ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-318403 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (17.86s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-318403 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
E1027 23:07:34.857579  271448 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-269600/.minikube/profiles/old-k8s-version-546957/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-318403 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (17.403514286s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-318403 -n newest-cni-318403
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (17.86s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (9.6s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-386286 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [33093963-21d7-45e8-baaf-258b9342f522] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [33093963-21d7-45e8-baaf-258b9342f522] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.004050402s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-386286 exec busybox -- /bin/sh -c "ulimit -n"
E1027 23:07:45.099276  271448 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-269600/.minikube/profiles/old-k8s-version-546957/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.60s)
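
The deploy check ends by exec-ing into the busybox pod to read the open-file limit, which confirms the container is schedulable and running a shell. The same probe driven from Go, mirroring the logged kubectl command (context name from this run):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		out, err := exec.Command("kubectl", "--context", "no-preload-386286",
			"exec", "busybox", "--", "/bin/sh", "-c", "ulimit -n").CombinedOutput()
		if err != nil {
			panic(err)
		}
		fmt.Printf("open-file limit inside the pod: %s", out)
	}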

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.68s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-386286 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-386286 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.521182848s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-386286 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.68s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (14.67s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-386286 --alsologtostderr -v=3
E1027 23:07:48.838590  271448 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-269600/.minikube/profiles/addons-437249/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-386286 --alsologtostderr -v=3: (14.674762335s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (14.67s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.28s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-318403 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.28s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (3.06s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-318403 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-318403 -n newest-cni-318403
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-318403 -n newest-cni-318403: exit status 2 (331.555617ms)

                                                
                                                
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-318403 -n newest-cni-318403
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-318403 -n newest-cni-318403: exit status 2 (344.496459ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-318403 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-318403 -n newest-cni-318403
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-318403 -n newest-cni-318403
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.06s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (89.95s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-521323 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-521323 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd: (1m29.950183867s)
--- PASS: TestNetworkPlugins/group/auto/Start (89.95s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-386286 -n no-preload-386286
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-386286 -n no-preload-386286: exit status 7 (93.412888ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-386286 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.22s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (55.61s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-386286 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
E1027 23:08:05.580993  271448 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-269600/.minikube/profiles/old-k8s-version-546957/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 23:08:46.543220  271448 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-269600/.minikube/profiles/old-k8s-version-546957/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-386286 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (55.241108298s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-386286 -n no-preload-386286
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (55.61s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-85b6t" [10ea2c8a-e09e-4866-8037-0f9ccfa0c3d2] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003154012s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.12s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-85b6t" [10ea2c8a-e09e-4866-8037-0f9ccfa0c3d2] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.009454034s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-386286 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.12s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-386286 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (3.23s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-386286 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-386286 -n no-preload-386286
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-386286 -n no-preload-386286: exit status 2 (357.541945ms)

                                                
                                                
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-386286 -n no-preload-386286
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-386286 -n no-preload-386286: exit status 2 (336.362114ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-386286 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-386286 -n no-preload-386286
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-386286 -n no-preload-386286
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.23s)
E1027 23:14:48.340110  271448 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-269600/.minikube/profiles/auto-521323/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (84.08s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-521323 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-521323 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd: (1m24.076809209s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (84.08s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.34s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-521323 "pgrep -a kubelet"
I1027 23:09:27.500030  271448 config.go:182] Loaded profile config "auto-521323": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.34s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (9.36s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-521323 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-b9nzx" [117a440d-fb33-45a8-9f0c-e46130f761b5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-b9nzx" [117a440d-fb33-45a8-9f0c-e46130f761b5] Running
E1027 23:09:32.905670  271448 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-269600/.minikube/profiles/functional-735759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.004785421s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.36s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.26s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-521323 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.26s)
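
The DNS probe resolves the short service name kubernetes.default from inside the netcat pod, exercising the cluster's search-domain resolution. A Go equivalent of the lookup (only meaningful when run in a pod whose resolv.conf points at the cluster DNS):

	package main

	import (
		"fmt"
		"net"
	)

	func main() {
		addrs, err := net.LookupHost("kubernetes.default")
		if err != nil {
			panic(err) // in-pod DNS is broken
		}
		fmt.Println("kubernetes.default resolves to", addrs)
	}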

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.2s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-521323 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.20s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.22s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-521323 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.22s)
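
In the nc invocation, -z connects without sending data and -w 5 bounds the wait; the hairpin case dials the deployment's own service name from inside one of its pods. A rough Go equivalent of that connect-only probe (service name and port from the command above):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Like `nc -w 5 -z netcat 8080`: success means reachable, no payload sent.
		conn, err := net.DialTimeout("tcp", "netcat:8080", 5*time.Second)
		if err != nil {
			panic(err)
		}
		conn.Close()
		fmt.Println("netcat:8080 reachable (hairpin OK)")
	}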

                                                
                                    
TestNetworkPlugins/group/calico/Start (56.28s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-521323 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd
E1027 23:10:04.673301  271448 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-269600/.minikube/profiles/default-k8s-diff-port-833544/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 23:10:04.679648  271448 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-269600/.minikube/profiles/default-k8s-diff-port-833544/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 23:10:04.691071  271448 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-269600/.minikube/profiles/default-k8s-diff-port-833544/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 23:10:04.712444  271448 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-269600/.minikube/profiles/default-k8s-diff-port-833544/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 23:10:04.753814  271448 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-269600/.minikube/profiles/default-k8s-diff-port-833544/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 23:10:04.835183  271448 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-269600/.minikube/profiles/default-k8s-diff-port-833544/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 23:10:04.996704  271448 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-269600/.minikube/profiles/default-k8s-diff-port-833544/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 23:10:05.318144  271448 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-269600/.minikube/profiles/default-k8s-diff-port-833544/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 23:10:05.959934  271448 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-269600/.minikube/profiles/default-k8s-diff-port-833544/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 23:10:07.241418  271448 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-269600/.minikube/profiles/default-k8s-diff-port-833544/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 23:10:08.464932  271448 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-269600/.minikube/profiles/old-k8s-version-546957/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 23:10:09.802889  271448 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-269600/.minikube/profiles/default-k8s-diff-port-833544/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 23:10:14.924493  271448 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-269600/.minikube/profiles/default-k8s-diff-port-833544/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 23:10:25.166398  271448 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-269600/.minikube/profiles/default-k8s-diff-port-833544/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-521323 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd: (56.284715304s)
--- PASS: TestNetworkPlugins/group/calico/Start (56.28s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-lswvr" [770a9782-0807-4f0f-bce9-442e499346d5] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004257656s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.48s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-521323 "pgrep -a kubelet"
E1027 23:10:45.648309  271448 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-269600/.minikube/profiles/default-k8s-diff-port-833544/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
I1027 23:10:45.997944  271448 config.go:182] Loaded profile config "kindnet-521323": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.48s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (11.33s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-521323 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-rmx29" [d3559a7c-e65e-4a17-bfc5-ceba2b6e23c0] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-rmx29" [d3559a7c-e65e-4a17-bfc5-ceba2b6e23c0] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.003430812s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.33s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.25s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-521323 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.25s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-521323 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.18s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-521323 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.17s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-lzl7x" [05b94d17-3911-4d7a-b440-ee91627cffdf] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.006823203s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.5s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-521323 "pgrep -a kubelet"
I1027 23:11:05.637907  271448 config.go:182] Loaded profile config "calico-521323": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.50s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (12.54s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-521323 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-w9x9j" [259fde4c-2ac6-478b-a79a-148d672acf34] Pending
helpers_test.go:352: "netcat-cd4db9dbf-w9x9j" [259fde4c-2ac6-478b-a79a-148d672acf34] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.003374345s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.54s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.26s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-521323 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.26s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.27s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-521323 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.27s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.23s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-521323 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.23s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (69s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-521323 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd
E1027 23:11:26.610726  271448 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-269600/.minikube/profiles/default-k8s-diff-port-833544/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-521323 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd: (1m9.00238578s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (69.00s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (80.5s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-521323 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd
E1027 23:12:24.606152  271448 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-269600/.minikube/profiles/old-k8s-version-546957/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-521323 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd: (1m20.503001157s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (80.50s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.35s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-521323 "pgrep -a kubelet"
I1027 23:12:30.604703  271448 config.go:182] Loaded profile config "custom-flannel-521323": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.35s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (9.29s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-521323 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-qh58c" [a171e188-e11c-4ff1-9b98-106ddb0d6754] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1027 23:12:31.907416  271448 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-269600/.minikube/profiles/addons-437249/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-qh58c" [a171e188-e11c-4ff1-9b98-106ddb0d6754] Running
E1027 23:12:35.914061  271448 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-269600/.minikube/profiles/no-preload-386286/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 23:12:35.920586  271448 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-269600/.minikube/profiles/no-preload-386286/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 23:12:35.932039  271448 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-269600/.minikube/profiles/no-preload-386286/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 23:12:35.953633  271448 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-269600/.minikube/profiles/no-preload-386286/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 23:12:35.995058  271448 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-269600/.minikube/profiles/no-preload-386286/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 23:12:36.076475  271448 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-269600/.minikube/profiles/no-preload-386286/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 23:12:36.238006  271448 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-269600/.minikube/profiles/no-preload-386286/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 23:12:36.559765  271448 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-269600/.minikube/profiles/no-preload-386286/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 23:12:37.202162  271448 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-269600/.minikube/profiles/no-preload-386286/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 23:12:38.483951  271448 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-269600/.minikube/profiles/no-preload-386286/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 9.004371401s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (9.29s)

TestNetworkPlugins/group/custom-flannel/DNS (0.24s)
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-521323 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.24s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-521323 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.16s)
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-521323 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.16s)
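Note on the HairPin step above: it verifies that a pod can reach itself through its own Service ("hairpin" traffic), which misconfigured CNIs commonly break. A minimal manual reproduction against this profile, assuming the netcat deployment from testdata/netcat-deployment.yaml exposes port 8080 behind a Service named "netcat" (the names the test drives):

    # Deploy (or re-deploy) the netcat test workload.
    kubectl --context custom-flannel-521323 replace --force -f testdata/netcat-deployment.yaml
    # Wait for the pod behind the deployment to become Ready.
    kubectl --context custom-flannel-521323 wait --for=condition=ready pod -l app=netcat --timeout=120s
    # Hairpin check: from inside the pod, dial back to its own Service on 8080;
    # exit code 0 means hairpin traffic works under this CNI.
    kubectl --context custom-flannel-521323 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"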

TestNetworkPlugins/group/flannel/Start (63.5s)
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-521323 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-521323 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd: (1m3.498730397s)
--- PASS: TestNetworkPlugins/group/flannel/Start (63.50s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.43s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-521323 "pgrep -a kubelet"
I1027 23:13:06.406980  271448 config.go:182] Loaded profile config "enable-default-cni-521323": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.43s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.33s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-521323 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-dpk8w" [36290af4-99e7-4b44-9ff4-3fbddef944da] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-dpk8w" [36290af4-99e7-4b44-9ff4-3fbddef944da] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.00502798s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.33s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.22s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-521323 exec deployment/netcat -- nslookup kubernetes.default
E1027 23:13:16.889879  271448 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-269600/.minikube/profiles/no-preload-386286/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.22s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.19s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-521323 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.19s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.17s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-521323 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.17s)

TestNetworkPlugins/group/bridge/Start (48.08s)
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-521323 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd
E1027 23:13:57.851988  271448 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-269600/.minikube/profiles/no-preload-386286/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-521323 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd: (48.083559904s)
--- PASS: TestNetworkPlugins/group/bridge/Start (48.08s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-p9rc5" [e0e654cf-e880-4564-966d-95b1fa101bad] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.006097199s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
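For reference, the ControllerPod step polls up to 10m for a Running pod labeled app=flannel in the kube-flannel namespace. A rough manual equivalent (a sketch; the test harness polls pod phase itself rather than shelling out to kubectl):

    # List the flannel controller pods the test waits on.
    kubectl --context flannel-521323 -n kube-flannel get pods -l app=flannel
    # Or block until one reports Ready, mirroring the test's 10m budget.
    kubectl --context flannel-521323 -n kube-flannel wait --for=condition=ready pod -l app=flannel --timeout=10m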

TestNetworkPlugins/group/flannel/KubeletFlags (0.44s)
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-521323 "pgrep -a kubelet"
I1027 23:14:12.225791  271448 config.go:182] Loaded profile config "flannel-521323": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.44s)

TestNetworkPlugins/group/flannel/NetCatPod (10.34s)
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-521323 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-7tj4s" [d0eaf32c-4ff5-4cce-89c3-f998e2ded9b1] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-7tj4s" [d0eaf32c-4ff5-4cce-89c3-f998e2ded9b1] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.002886986s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.34s)

TestNetworkPlugins/group/flannel/DNS (0.26s)
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-521323 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.26s)

TestNetworkPlugins/group/flannel/Localhost (0.27s)
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-521323 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.27s)

TestNetworkPlugins/group/flannel/HairPin (0.15s)
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-521323 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.15s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.42s)
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-521323 "pgrep -a kubelet"
E1027 23:14:30.414524  271448 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-269600/.minikube/profiles/auto-521323/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
I1027 23:14:30.811182  271448 config.go:182] Loaded profile config "bridge-521323": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.42s)

TestNetworkPlugins/group/bridge/NetCatPod (9.4s)
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-521323 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-2xvrb" [68c31751-151e-44fc-8a4f-e1d6543a9a0f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1027 23:14:32.905831  271448 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-269600/.minikube/profiles/functional-735759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 23:14:32.977192  271448 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-269600/.minikube/profiles/auto-521323/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-2xvrb" [68c31751-151e-44fc-8a4f-e1d6543a9a0f] Running
E1027 23:14:38.098545  271448 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-269600/.minikube/profiles/auto-521323/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.004173973s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.40s)

TestNetworkPlugins/group/bridge/DNS (0.24s)
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-521323 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.24s)

TestNetworkPlugins/group/bridge/Localhost (0.21s)
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-521323 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.21s)

TestNetworkPlugins/group/bridge/HairPin (0.2s)
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-521323 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.20s)

Test skip (30/332)

TestDownloadOnly/v1.28.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

TestDownloadOnly/v1.28.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

TestDownloadOnly/v1.28.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

TestDownloadOnly/v1.34.1/cached-images (0s)
=== RUN   TestDownloadOnly/v1.34.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.1/cached-images (0.00s)

TestDownloadOnly/v1.34.1/binaries (0s)
=== RUN   TestDownloadOnly/v1.34.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.1/binaries (0.00s)

TestDownloadOnly/v1.34.1/kubectl (0s)
=== RUN   TestDownloadOnly/v1.34.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.1/kubectl (0.00s)

TestDownloadOnlyKic (0.43s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-081277 --alsologtostderr --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:248: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-081277" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-081277
--- SKIP: TestDownloadOnlyKic (0.43s)

TestOffline (0s)
=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

TestAddons/serial/GCPAuth/RealCredentials (0.01s)
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:759: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.01s)

TestAddons/parallel/Olm (0s)
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestAddons/parallel/AmdGpuDevicePlugin (0s)
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1033: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

TestDockerFlags (0s)
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1792: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/DockerEnv (0s)
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestFunctionalNewestKubernetes (0s)
=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestISOImage (0s)
=== RUN   TestISOImage
iso_test.go:34: This test requires a VM driver
--- SKIP: TestISOImage (0.00s)

TestChangeNoneUser (0s)
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.16s)
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-962052" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-962052
--- SKIP: TestStartStop/group/disable-driver-mounts (0.16s)

TestNetworkPlugins/group/kubenet (5.64s)
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as containerd container runtimes requires CNI
panic.go:636: 
----------------------- debugLogs start: kubenet-521323 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-521323

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-521323

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-521323

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-521323

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-521323

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-521323

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-521323

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-521323

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-521323

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-521323

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-521323" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-521323"

>>> host: /etc/hosts:
* Profile "kubenet-521323" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-521323"

>>> host: /etc/resolv.conf:
* Profile "kubenet-521323" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-521323"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-521323

>>> host: crictl pods:
* Profile "kubenet-521323" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-521323"

>>> host: crictl containers:
* Profile "kubenet-521323" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-521323"

>>> k8s: describe netcat deployment:
error: context "kubenet-521323" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-521323" does not exist

>>> k8s: netcat logs:
error: context "kubenet-521323" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-521323" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-521323" does not exist

>>> k8s: coredns logs:
error: context "kubenet-521323" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-521323" does not exist

>>> k8s: api server logs:
error: context "kubenet-521323" does not exist

>>> host: /etc/cni:
* Profile "kubenet-521323" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-521323"

>>> host: ip a s:
* Profile "kubenet-521323" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-521323"

>>> host: ip r s:
* Profile "kubenet-521323" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-521323"

>>> host: iptables-save:
* Profile "kubenet-521323" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-521323"

>>> host: iptables table nat:
* Profile "kubenet-521323" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-521323"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-521323" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-521323" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-521323" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-521323" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-521323"

>>> host: kubelet daemon config:
* Profile "kubenet-521323" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-521323"

>>> k8s: kubelet logs:
* Profile "kubenet-521323" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-521323"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-521323" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-521323"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-521323" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-521323"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-521323

>>> host: docker daemon status:
* Profile "kubenet-521323" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-521323"

>>> host: docker daemon config:
* Profile "kubenet-521323" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-521323"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-521323" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-521323"

>>> host: docker system info:
* Profile "kubenet-521323" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-521323"

>>> host: cri-docker daemon status:
* Profile "kubenet-521323" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-521323"

>>> host: cri-docker daemon config:
* Profile "kubenet-521323" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-521323"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-521323" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-521323"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-521323" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-521323"

>>> host: cri-dockerd version:
* Profile "kubenet-521323" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-521323"

>>> host: containerd daemon status:
* Profile "kubenet-521323" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-521323"

>>> host: containerd daemon config:
* Profile "kubenet-521323" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-521323"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-521323" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-521323"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-521323" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-521323"

>>> host: containerd config dump:
* Profile "kubenet-521323" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-521323"

>>> host: crio daemon status:
* Profile "kubenet-521323" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-521323"

>>> host: crio daemon config:
* Profile "kubenet-521323" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-521323"

>>> host: /etc/crio:
* Profile "kubenet-521323" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-521323"

>>> host: crio config:
* Profile "kubenet-521323" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-521323"
----------------------- debugLogs end: kubenet-521323 [took: 5.456212409s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-521323" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-521323
--- SKIP: TestNetworkPlugins/group/kubenet (5.64s)
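The kubenet skip is expected: kubenet is not a CNI plugin, and containerd-backed clusters require one (net_test.go:93 above). To get comparable bridge networking on this runtime, a CNI has to be chosen explicitly at start time; a sketch reusing the flags exercised elsewhere in this run (the profile name here is only an example):

    # kubenet is unavailable with containerd, so select an explicit CNI instead;
    # --cni=bridge mirrors the TestNetworkPlugins/group/bridge case above.
    out/minikube-linux-arm64 start -p bridge-example --memory=3072 --cni=bridge --driver=docker --container-runtime=containerd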

TestNetworkPlugins/group/cilium (7.03s)
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:636: 
----------------------- debugLogs start: cilium-521323 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-521323

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-521323

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-521323

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-521323

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-521323

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-521323

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-521323

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-521323

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-521323

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-521323

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-521323" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-521323"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-521323" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-521323"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-521323" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-521323"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-521323

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-521323" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-521323"

>>> host: crictl containers:
* Profile "cilium-521323" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-521323"

>>> k8s: describe netcat deployment:
error: context "cilium-521323" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-521323" does not exist

>>> k8s: netcat logs:
error: context "cilium-521323" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-521323" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-521323" does not exist

>>> k8s: coredns logs:
error: context "cilium-521323" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-521323" does not exist

>>> k8s: api server logs:
error: context "cilium-521323" does not exist

>>> host: /etc/cni:
* Profile "cilium-521323" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-521323"

>>> host: ip a s:
* Profile "cilium-521323" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-521323"

>>> host: ip r s:
* Profile "cilium-521323" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-521323"

>>> host: iptables-save:
* Profile "cilium-521323" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-521323"

>>> host: iptables table nat:
* Profile "cilium-521323" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-521323"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-521323

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-521323

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-521323" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-521323" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-521323

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-521323

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-521323" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-521323" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-521323" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-521323" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-521323" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-521323" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-521323"

>>> host: kubelet daemon config:
* Profile "cilium-521323" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-521323"

>>> k8s: kubelet logs:
* Profile "cilium-521323" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-521323"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-521323" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-521323"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-521323" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-521323"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-521323

>>> host: docker daemon status:
* Profile "cilium-521323" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-521323"

>>> host: docker daemon config:
* Profile "cilium-521323" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-521323"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-521323" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-521323"

>>> host: docker system info:
* Profile "cilium-521323" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-521323"

>>> host: cri-docker daemon status:
* Profile "cilium-521323" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-521323"

>>> host: cri-docker daemon config:
* Profile "cilium-521323" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-521323"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-521323" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-521323"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-521323" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-521323"

>>> host: cri-dockerd version:
* Profile "cilium-521323" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-521323"

>>> host: containerd daemon status:
* Profile "cilium-521323" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-521323"

>>> host: containerd daemon config:
* Profile "cilium-521323" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-521323"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-521323" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-521323"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-521323" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-521323"

>>> host: containerd config dump:
* Profile "cilium-521323" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-521323"

>>> host: crio daemon status:
* Profile "cilium-521323" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-521323"

>>> host: crio daemon config:
* Profile "cilium-521323" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-521323"

>>> host: /etc/crio:
* Profile "cilium-521323" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-521323"

>>> host: crio config:
* Profile "cilium-521323" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-521323"

----------------------- debugLogs end: cilium-521323 [took: 6.71712668s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-521323" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-521323
--- SKIP: TestNetworkPlugins/group/cilium (7.03s)
