Test Report: Docker_Linux_containerd_arm64 21835

73e6d6839bae6cdde957e116826ac4e2fc7d714a:2025-11-01:42153

Failed tests (1/332)

Order  Failed test            Duration (s)
250    TestScheduledStopUnix  41.62
TestScheduledStopUnix (41.62s)
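
The assertion that fails is at scheduled_stop_test.go:98: each new `minikube stop --schedule` is expected to kill any previously spawned scheduled-stop process before arming a new one, and here the old process (PID 152898 in the log below) was still alive after rescheduling. As a rough illustration only (this is a hedged sketch assuming Linux, not minikube's actual test helper), a liveness probe of that kind can use the standard signal-0 idiom:

package main

import (
	"errors"
	"fmt"
	"syscall"
)

// processRunning reports whether pid still exists. Sending signal 0
// delivers nothing; the kernel only performs the existence and
// permission checks.
func processRunning(pid int) bool {
	err := syscall.Kill(pid, syscall.Signal(0))
	// nil: process exists and is signalable; EPERM: exists but belongs
	// to another user; any other error (ESRCH): no such process.
	return err == nil || errors.Is(err, syscall.EPERM)
}

func main() {
	pid := 152898 // the PID reported in the failure log below
	if processRunning(pid) {
		fmt.Printf("process %d running but should have been killed on reschedule of stop\n", pid)
	}
}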

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-599041 --memory=3072 --driver=docker  --container-runtime=containerd
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-599041 --memory=3072 --driver=docker  --container-runtime=containerd: (36.446855721s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-599041 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-599041 -n scheduled-stop-599041
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-599041 --schedule 15s
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:98: process 152898 running but should have been killed on reschedule of stop
panic.go:636: *** TestScheduledStopUnix FAILED at 2025-11-01 09:05:35.750794877 +0000 UTC m=+2170.839262175
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestScheduledStopUnix]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestScheduledStopUnix]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect scheduled-stop-599041
helpers_test.go:243: (dbg) docker inspect scheduled-stop-599041:
-- stdout --
	[
	    {
	        "Id": "769d2cdeb9d73a580a908770ec6171a9277e0dbcfe105b256454403f549ab5c1",
	        "Created": "2025-11-01T09:05:04.673121421Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 150909,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-01T09:05:04.749577116Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/769d2cdeb9d73a580a908770ec6171a9277e0dbcfe105b256454403f549ab5c1/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/769d2cdeb9d73a580a908770ec6171a9277e0dbcfe105b256454403f549ab5c1/hostname",
	        "HostsPath": "/var/lib/docker/containers/769d2cdeb9d73a580a908770ec6171a9277e0dbcfe105b256454403f549ab5c1/hosts",
	        "LogPath": "/var/lib/docker/containers/769d2cdeb9d73a580a908770ec6171a9277e0dbcfe105b256454403f549ab5c1/769d2cdeb9d73a580a908770ec6171a9277e0dbcfe105b256454403f549ab5c1-json.log",
	        "Name": "/scheduled-stop-599041",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "scheduled-stop-599041:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "scheduled-stop-599041",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "769d2cdeb9d73a580a908770ec6171a9277e0dbcfe105b256454403f549ab5c1",
	                "LowerDir": "/var/lib/docker/overlay2/b5b3d19e1e23a1e4229284b8e48621fffb114388b12f8f5843022a577e60a9ad-init/diff:/var/lib/docker/overlay2/2ae9db781f71f6b40134c14ce962b520e95fb32a2be583edc8b9ca9696e3b6fd/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b5b3d19e1e23a1e4229284b8e48621fffb114388b12f8f5843022a577e60a9ad/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b5b3d19e1e23a1e4229284b8e48621fffb114388b12f8f5843022a577e60a9ad/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b5b3d19e1e23a1e4229284b8e48621fffb114388b12f8f5843022a577e60a9ad/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "scheduled-stop-599041",
	                "Source": "/var/lib/docker/volumes/scheduled-stop-599041/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "scheduled-stop-599041",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "scheduled-stop-599041",
	                "name.minikube.sigs.k8s.io": "scheduled-stop-599041",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "db98d887e68ca8a792f88f370660ba5a7b45050403f48d0f8fe7ca97f3075c2d",
	            "SandboxKey": "/var/run/docker/netns/db98d887e68c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32969"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32970"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32973"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32971"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32972"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "scheduled-stop-599041": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "0e:5d:09:d8:c4:21",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "1423ff4e7bda50d7f2b4443b86acd1e88ce4d225f3212189f44a33bd0c1d1b50",
	                    "EndpointID": "7b2293b7b7677fc70db958d9b7bee067a6c8a0cd606399196af66c25f4fe3087",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "scheduled-stop-599041",
	                        "769d2cdeb9d7"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
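
One detail worth noting in the inspect output above: HostConfig.PortBindings requests HostPort "" on 127.0.0.1 for every exposed port, i.e. an ephemeral host port, while NetworkSettings.Ports records what Docker actually assigned (32969 for 22/tcp in this run). The harness resolves that mapping at runtime with a Go template; the same `docker container inspect -f` invocation appears verbatim further down in this log. A minimal sketch of that lookup, assuming only the container name from this report:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hostSSHPort asks Docker which ephemeral 127.0.0.1 port was mapped to
// the container's 22/tcp, using the same template minikube runs below.
func hostSSHPort(container string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect",
		"-f", `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
		container).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := hostSSHPort("scheduled-stop-599041")
	if err != nil {
		panic(err)
	}
	fmt.Println("ssh mapped to 127.0.0.1:" + port) // 32969 in this run
}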
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-599041 -n scheduled-stop-599041
helpers_test.go:252: <<< TestScheduledStopUnix FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestScheduledStopUnix]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p scheduled-stop-599041 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p scheduled-stop-599041 logs -n 25: (1.072269211s)
helpers_test.go:260: TestScheduledStopUnix logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                              ARGS                                                                               │        PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ stop    │ -p multinode-819483                                                                                                                                             │ multinode-819483      │ jenkins │ v1.37.0 │ 01 Nov 25 08:59 UTC │ 01 Nov 25 09:00 UTC │
	│ start   │ -p multinode-819483 --wait=true -v=5 --alsologtostderr                                                                                                          │ multinode-819483      │ jenkins │ v1.37.0 │ 01 Nov 25 09:00 UTC │ 01 Nov 25 09:00 UTC │
	│ node    │ list -p multinode-819483                                                                                                                                        │ multinode-819483      │ jenkins │ v1.37.0 │ 01 Nov 25 09:00 UTC │                     │
	│ node    │ multinode-819483 node delete m03                                                                                                                                │ multinode-819483      │ jenkins │ v1.37.0 │ 01 Nov 25 09:00 UTC │ 01 Nov 25 09:00 UTC │
	│ stop    │ multinode-819483 stop                                                                                                                                           │ multinode-819483      │ jenkins │ v1.37.0 │ 01 Nov 25 09:00 UTC │ 01 Nov 25 09:01 UTC │
	│ start   │ -p multinode-819483 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd                                                          │ multinode-819483      │ jenkins │ v1.37.0 │ 01 Nov 25 09:01 UTC │ 01 Nov 25 09:02 UTC │
	│ node    │ list -p multinode-819483                                                                                                                                        │ multinode-819483      │ jenkins │ v1.37.0 │ 01 Nov 25 09:02 UTC │                     │
	│ start   │ -p multinode-819483-m02 --driver=docker  --container-runtime=containerd                                                                                         │ multinode-819483-m02  │ jenkins │ v1.37.0 │ 01 Nov 25 09:02 UTC │                     │
	│ start   │ -p multinode-819483-m03 --driver=docker  --container-runtime=containerd                                                                                         │ multinode-819483-m03  │ jenkins │ v1.37.0 │ 01 Nov 25 09:02 UTC │ 01 Nov 25 09:02 UTC │
	│ node    │ add -p multinode-819483                                                                                                                                         │ multinode-819483      │ jenkins │ v1.37.0 │ 01 Nov 25 09:02 UTC │                     │
	│ delete  │ -p multinode-819483-m03                                                                                                                                         │ multinode-819483-m03  │ jenkins │ v1.37.0 │ 01 Nov 25 09:02 UTC │ 01 Nov 25 09:02 UTC │
	│ delete  │ -p multinode-819483                                                                                                                                             │ multinode-819483      │ jenkins │ v1.37.0 │ 01 Nov 25 09:02 UTC │ 01 Nov 25 09:02 UTC │
	│ start   │ -p test-preload-208683 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.0 │ test-preload-208683   │ jenkins │ v1.37.0 │ 01 Nov 25 09:02 UTC │ 01 Nov 25 09:03 UTC │
	│ image   │ test-preload-208683 image pull gcr.io/k8s-minikube/busybox                                                                                                      │ test-preload-208683   │ jenkins │ v1.37.0 │ 01 Nov 25 09:03 UTC │ 01 Nov 25 09:03 UTC │
	│ stop    │ -p test-preload-208683                                                                                                                                          │ test-preload-208683   │ jenkins │ v1.37.0 │ 01 Nov 25 09:03 UTC │ 01 Nov 25 09:04 UTC │
	│ start   │ -p test-preload-208683 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd                                         │ test-preload-208683   │ jenkins │ v1.37.0 │ 01 Nov 25 09:04 UTC │ 01 Nov 25 09:04 UTC │
	│ image   │ test-preload-208683 image list                                                                                                                                  │ test-preload-208683   │ jenkins │ v1.37.0 │ 01 Nov 25 09:04 UTC │ 01 Nov 25 09:04 UTC │
	│ delete  │ -p test-preload-208683                                                                                                                                          │ test-preload-208683   │ jenkins │ v1.37.0 │ 01 Nov 25 09:04 UTC │ 01 Nov 25 09:04 UTC │
	│ start   │ -p scheduled-stop-599041 --memory=3072 --driver=docker  --container-runtime=containerd                                                                          │ scheduled-stop-599041 │ jenkins │ v1.37.0 │ 01 Nov 25 09:04 UTC │ 01 Nov 25 09:05 UTC │
	│ stop    │ -p scheduled-stop-599041 --schedule 5m                                                                                                                          │ scheduled-stop-599041 │ jenkins │ v1.37.0 │ 01 Nov 25 09:05 UTC │                     │
	│ stop    │ -p scheduled-stop-599041 --schedule 5m                                                                                                                          │ scheduled-stop-599041 │ jenkins │ v1.37.0 │ 01 Nov 25 09:05 UTC │                     │
	│ stop    │ -p scheduled-stop-599041 --schedule 5m                                                                                                                          │ scheduled-stop-599041 │ jenkins │ v1.37.0 │ 01 Nov 25 09:05 UTC │                     │
	│ stop    │ -p scheduled-stop-599041 --schedule 15s                                                                                                                         │ scheduled-stop-599041 │ jenkins │ v1.37.0 │ 01 Nov 25 09:05 UTC │                     │
	│ stop    │ -p scheduled-stop-599041 --schedule 15s                                                                                                                         │ scheduled-stop-599041 │ jenkins │ v1.37.0 │ 01 Nov 25 09:05 UTC │                     │
	│ stop    │ -p scheduled-stop-599041 --schedule 15s                                                                                                                         │ scheduled-stop-599041 │ jenkins │ v1.37.0 │ 01 Nov 25 09:05 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 09:04:58
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 09:04:58.821876  150525 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:04:58.821977  150525 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:04:58.821981  150525 out.go:374] Setting ErrFile to fd 2...
	I1101 09:04:58.821984  150525 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:04:58.822245  150525 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21835-2307/.minikube/bin
	I1101 09:04:58.822661  150525 out.go:368] Setting JSON to false
	I1101 09:04:58.823549  150525 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":2848,"bootTime":1761985051,"procs":153,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1101 09:04:58.823605  150525 start.go:143] virtualization:  
	I1101 09:04:58.827493  150525 out.go:179] * [scheduled-stop-599041] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1101 09:04:58.832221  150525 out.go:179]   - MINIKUBE_LOCATION=21835
	I1101 09:04:58.832281  150525 notify.go:221] Checking for updates...
	I1101 09:04:58.838842  150525 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 09:04:58.842165  150525 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21835-2307/kubeconfig
	I1101 09:04:58.845451  150525 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21835-2307/.minikube
	I1101 09:04:58.848547  150525 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1101 09:04:58.851613  150525 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 09:04:58.854965  150525 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 09:04:58.883919  150525 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1101 09:04:58.884025  150525 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 09:04:58.940944  150525 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:true NGoroutines:42 SystemTime:2025-11-01 09:04:58.931880007 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 09:04:58.941045  150525 docker.go:319] overlay module found
	I1101 09:04:58.946326  150525 out.go:179] * Using the docker driver based on user configuration
	I1101 09:04:58.949394  150525 start.go:309] selected driver: docker
	I1101 09:04:58.949404  150525 start.go:930] validating driver "docker" against <nil>
	I1101 09:04:58.949415  150525 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 09:04:58.950262  150525 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 09:04:59.008846  150525 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:true NGoroutines:42 SystemTime:2025-11-01 09:04:58.99971848 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 09:04:59.009000  150525 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1101 09:04:59.009213  150525 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1101 09:04:59.012277  150525 out.go:179] * Using Docker driver with root privileges
	I1101 09:04:59.015278  150525 cni.go:84] Creating CNI manager for ""
	I1101 09:04:59.015342  150525 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1101 09:04:59.015357  150525 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1101 09:04:59.015440  150525 start.go:353] cluster config:
	{Name:scheduled-stop-599041 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:scheduled-stop-599041 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:04:59.020460  150525 out.go:179] * Starting "scheduled-stop-599041" primary control-plane node in "scheduled-stop-599041" cluster
	I1101 09:04:59.023671  150525 cache.go:124] Beginning downloading kic base image for docker with containerd
	I1101 09:04:59.026603  150525 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1101 09:04:59.029715  150525 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1101 09:04:59.029735  150525 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1101 09:04:59.029767  150525 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21835-2307/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4
	I1101 09:04:59.029775  150525 cache.go:59] Caching tarball of preloaded images
	I1101 09:04:59.029873  150525 preload.go:233] Found /home/jenkins/minikube-integration/21835-2307/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1101 09:04:59.029881  150525 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on containerd
	I1101 09:04:59.030220  150525 profile.go:143] Saving config to /home/jenkins/minikube-integration/21835-2307/.minikube/profiles/scheduled-stop-599041/config.json ...
	I1101 09:04:59.030238  150525 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-2307/.minikube/profiles/scheduled-stop-599041/config.json: {Name:mka44dd4be43300ac877ef95aed7e04a7d6c4d5a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:04:59.048416  150525 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1101 09:04:59.048428  150525 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1101 09:04:59.048446  150525 cache.go:233] Successfully downloaded all kic artifacts
	I1101 09:04:59.048478  150525 start.go:360] acquireMachinesLock for scheduled-stop-599041: {Name:mk673be61ba470c7862ae4fbcae3e56f8e83cebd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 09:04:59.048594  150525 start.go:364] duration metric: took 102.262µs to acquireMachinesLock for "scheduled-stop-599041"
	I1101 09:04:59.048619  150525 start.go:93] Provisioning new machine with config: &{Name:scheduled-stop-599041 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:scheduled-stop-599041 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1101 09:04:59.048683  150525 start.go:125] createHost starting for "" (driver="docker")
	I1101 09:04:59.053779  150525 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1101 09:04:59.054000  150525 start.go:159] libmachine.API.Create for "scheduled-stop-599041" (driver="docker")
	I1101 09:04:59.054027  150525 client.go:173] LocalClient.Create starting
	I1101 09:04:59.054109  150525 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21835-2307/.minikube/certs/ca.pem
	I1101 09:04:59.054144  150525 main.go:143] libmachine: Decoding PEM data...
	I1101 09:04:59.054162  150525 main.go:143] libmachine: Parsing certificate...
	I1101 09:04:59.054210  150525 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21835-2307/.minikube/certs/cert.pem
	I1101 09:04:59.054227  150525 main.go:143] libmachine: Decoding PEM data...
	I1101 09:04:59.054248  150525 main.go:143] libmachine: Parsing certificate...
	I1101 09:04:59.054593  150525 cli_runner.go:164] Run: docker network inspect scheduled-stop-599041 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1101 09:04:59.069998  150525 cli_runner.go:211] docker network inspect scheduled-stop-599041 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1101 09:04:59.070076  150525 network_create.go:284] running [docker network inspect scheduled-stop-599041] to gather additional debugging logs...
	I1101 09:04:59.070092  150525 cli_runner.go:164] Run: docker network inspect scheduled-stop-599041
	W1101 09:04:59.086651  150525 cli_runner.go:211] docker network inspect scheduled-stop-599041 returned with exit code 1
	I1101 09:04:59.086689  150525 network_create.go:287] error running [docker network inspect scheduled-stop-599041]: docker network inspect scheduled-stop-599041: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network scheduled-stop-599041 not found
	I1101 09:04:59.086699  150525 network_create.go:289] output of [docker network inspect scheduled-stop-599041]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network scheduled-stop-599041 not found
	
	** /stderr **
	I1101 09:04:59.086812  150525 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 09:04:59.103027  150525 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-519f9941df81 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:da:56:5d:1d:ec:84} reservation:<nil>}
	I1101 09:04:59.103244  150525 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-4e7f056af18f IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:1e:1b:39:7e:aa:dd} reservation:<nil>}
	I1101 09:04:59.103507  150525 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-cbe92f0bc81a IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:66:8d:b8:d6:a8:8c} reservation:<nil>}
	I1101 09:04:59.103826  150525 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a0a680}
	I1101 09:04:59.103841  150525 network_create.go:124] attempt to create docker network scheduled-stop-599041 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1101 09:04:59.103894  150525 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=scheduled-stop-599041 scheduled-stop-599041
	I1101 09:04:59.167449  150525 network_create.go:108] docker network scheduled-stop-599041 192.168.76.0/24 created
	I1101 09:04:59.167470  150525 kic.go:121] calculated static IP "192.168.76.2" for the "scheduled-stop-599041" container
	I1101 09:04:59.167542  150525 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1101 09:04:59.183697  150525 cli_runner.go:164] Run: docker volume create scheduled-stop-599041 --label name.minikube.sigs.k8s.io=scheduled-stop-599041 --label created_by.minikube.sigs.k8s.io=true
	I1101 09:04:59.202370  150525 oci.go:103] Successfully created a docker volume scheduled-stop-599041
	I1101 09:04:59.202456  150525 cli_runner.go:164] Run: docker run --rm --name scheduled-stop-599041-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=scheduled-stop-599041 --entrypoint /usr/bin/test -v scheduled-stop-599041:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1101 09:04:59.763149  150525 oci.go:107] Successfully prepared a docker volume scheduled-stop-599041
	I1101 09:04:59.763197  150525 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1101 09:04:59.763216  150525 kic.go:194] Starting extracting preloaded images to volume ...
	I1101 09:04:59.763279  150525 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21835-2307/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v scheduled-stop-599041:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	I1101 09:05:04.601661  150525 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21835-2307/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v scheduled-stop-599041:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (4.838333664s)
	I1101 09:05:04.601681  150525 kic.go:203] duration metric: took 4.838462191s to extract preloaded images to volume ...
	W1101 09:05:04.601843  150525 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1101 09:05:04.601944  150525 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1101 09:05:04.657508  150525 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname scheduled-stop-599041 --name scheduled-stop-599041 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=scheduled-stop-599041 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=scheduled-stop-599041 --network scheduled-stop-599041 --ip 192.168.76.2 --volume scheduled-stop-599041:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1101 09:05:04.965832  150525 cli_runner.go:164] Run: docker container inspect scheduled-stop-599041 --format={{.State.Running}}
	I1101 09:05:04.987794  150525 cli_runner.go:164] Run: docker container inspect scheduled-stop-599041 --format={{.State.Status}}
	I1101 09:05:05.014931  150525 cli_runner.go:164] Run: docker exec scheduled-stop-599041 stat /var/lib/dpkg/alternatives/iptables
	I1101 09:05:05.063958  150525 oci.go:144] the created container "scheduled-stop-599041" has a running status.
	I1101 09:05:05.063977  150525 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21835-2307/.minikube/machines/scheduled-stop-599041/id_rsa...
	I1101 09:05:05.673173  150525 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21835-2307/.minikube/machines/scheduled-stop-599041/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1101 09:05:05.692931  150525 cli_runner.go:164] Run: docker container inspect scheduled-stop-599041 --format={{.State.Status}}
	I1101 09:05:05.709936  150525 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1101 09:05:05.709948  150525 kic_runner.go:114] Args: [docker exec --privileged scheduled-stop-599041 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1101 09:05:05.753593  150525 cli_runner.go:164] Run: docker container inspect scheduled-stop-599041 --format={{.State.Status}}
	I1101 09:05:05.770201  150525 machine.go:94] provisionDockerMachine start ...
	I1101 09:05:05.770290  150525 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-599041
	I1101 09:05:05.786579  150525 main.go:143] libmachine: Using SSH client type: native
	I1101 09:05:05.786899  150525 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32969 <nil> <nil>}
	I1101 09:05:05.786905  150525 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 09:05:05.787559  150525 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1101 09:05:08.941452  150525 main.go:143] libmachine: SSH cmd err, output: <nil>: scheduled-stop-599041
	
	I1101 09:05:08.941467  150525 ubuntu.go:182] provisioning hostname "scheduled-stop-599041"
	I1101 09:05:08.941538  150525 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-599041
	I1101 09:05:08.959396  150525 main.go:143] libmachine: Using SSH client type: native
	I1101 09:05:08.959703  150525 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32969 <nil> <nil>}
	I1101 09:05:08.959713  150525 main.go:143] libmachine: About to run SSH command:
	sudo hostname scheduled-stop-599041 && echo "scheduled-stop-599041" | sudo tee /etc/hostname
	I1101 09:05:09.120026  150525 main.go:143] libmachine: SSH cmd err, output: <nil>: scheduled-stop-599041
	
	I1101 09:05:09.120093  150525 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-599041
	I1101 09:05:09.153541  150525 main.go:143] libmachine: Using SSH client type: native
	I1101 09:05:09.153894  150525 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32969 <nil> <nil>}
	I1101 09:05:09.153910  150525 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sscheduled-stop-599041' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 scheduled-stop-599041/g' /etc/hosts;
				else 
					echo '127.0.1.1 scheduled-stop-599041' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 09:05:09.302071  150525 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1101 09:05:09.302102  150525 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21835-2307/.minikube CaCertPath:/home/jenkins/minikube-integration/21835-2307/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21835-2307/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21835-2307/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21835-2307/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21835-2307/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21835-2307/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21835-2307/.minikube}
	I1101 09:05:09.302123  150525 ubuntu.go:190] setting up certificates
	I1101 09:05:09.302131  150525 provision.go:84] configureAuth start
	I1101 09:05:09.302190  150525 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" scheduled-stop-599041
	I1101 09:05:09.320516  150525 provision.go:143] copyHostCerts
	I1101 09:05:09.320570  150525 exec_runner.go:144] found /home/jenkins/minikube-integration/21835-2307/.minikube/ca.pem, removing ...
	I1101 09:05:09.320577  150525 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21835-2307/.minikube/ca.pem
	I1101 09:05:09.320650  150525 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21835-2307/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21835-2307/.minikube/ca.pem (1082 bytes)
	I1101 09:05:09.320745  150525 exec_runner.go:144] found /home/jenkins/minikube-integration/21835-2307/.minikube/cert.pem, removing ...
	I1101 09:05:09.320748  150525 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21835-2307/.minikube/cert.pem
	I1101 09:05:09.320774  150525 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21835-2307/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21835-2307/.minikube/cert.pem (1123 bytes)
	I1101 09:05:09.320824  150525 exec_runner.go:144] found /home/jenkins/minikube-integration/21835-2307/.minikube/key.pem, removing ...
	I1101 09:05:09.320828  150525 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21835-2307/.minikube/key.pem
	I1101 09:05:09.320849  150525 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21835-2307/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21835-2307/.minikube/key.pem (1679 bytes)
	I1101 09:05:09.320892  150525 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21835-2307/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21835-2307/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21835-2307/.minikube/certs/ca-key.pem org=jenkins.scheduled-stop-599041 san=[127.0.0.1 192.168.76.2 localhost minikube scheduled-stop-599041]
	I1101 09:05:10.059114  150525 provision.go:177] copyRemoteCerts
	I1101 09:05:10.059171  150525 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 09:05:10.059211  150525 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-599041
	I1101 09:05:10.076964  150525 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32969 SSHKeyPath:/home/jenkins/minikube-integration/21835-2307/.minikube/machines/scheduled-stop-599041/id_rsa Username:docker}
	I1101 09:05:10.185742  150525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2307/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1101 09:05:10.203020  150525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2307/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I1101 09:05:10.220881  150525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2307/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1101 09:05:10.238505  150525 provision.go:87] duration metric: took 936.350408ms to configureAuth
	I1101 09:05:10.238522  150525 ubuntu.go:206] setting minikube options for container-runtime
	I1101 09:05:10.238704  150525 config.go:182] Loaded profile config "scheduled-stop-599041": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1101 09:05:10.238710  150525 machine.go:97] duration metric: took 4.468501073s to provisionDockerMachine
	I1101 09:05:10.238716  150525 client.go:176] duration metric: took 11.184684455s to LocalClient.Create
	I1101 09:05:10.238735  150525 start.go:167] duration metric: took 11.18473495s to libmachine.API.Create "scheduled-stop-599041"
	I1101 09:05:10.238741  150525 start.go:293] postStartSetup for "scheduled-stop-599041" (driver="docker")
	I1101 09:05:10.238749  150525 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 09:05:10.238802  150525 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 09:05:10.238850  150525 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-599041
	I1101 09:05:10.255899  150525 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32969 SSHKeyPath:/home/jenkins/minikube-integration/21835-2307/.minikube/machines/scheduled-stop-599041/id_rsa Username:docker}
	I1101 09:05:10.361672  150525 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 09:05:10.365054  150525 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1101 09:05:10.365075  150525 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1101 09:05:10.365084  150525 filesync.go:126] Scanning /home/jenkins/minikube-integration/21835-2307/.minikube/addons for local assets ...
	I1101 09:05:10.365143  150525 filesync.go:126] Scanning /home/jenkins/minikube-integration/21835-2307/.minikube/files for local assets ...
	I1101 09:05:10.365228  150525 filesync.go:149] local asset: /home/jenkins/minikube-integration/21835-2307/.minikube/files/etc/ssl/certs/41072.pem -> 41072.pem in /etc/ssl/certs
	I1101 09:05:10.365326  150525 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 09:05:10.372476  150525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2307/.minikube/files/etc/ssl/certs/41072.pem --> /etc/ssl/certs/41072.pem (1708 bytes)
	I1101 09:05:10.389769  150525 start.go:296] duration metric: took 151.009171ms for postStartSetup
	I1101 09:05:10.390116  150525 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" scheduled-stop-599041
	I1101 09:05:10.406778  150525 profile.go:143] Saving config to /home/jenkins/minikube-integration/21835-2307/.minikube/profiles/scheduled-stop-599041/config.json ...
	I1101 09:05:10.407060  150525 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 09:05:10.407104  150525 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-599041
	I1101 09:05:10.423778  150525 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32969 SSHKeyPath:/home/jenkins/minikube-integration/21835-2307/.minikube/machines/scheduled-stop-599041/id_rsa Username:docker}
	I1101 09:05:10.522749  150525 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1101 09:05:10.527326  150525 start.go:128] duration metric: took 11.478629645s to createHost
	I1101 09:05:10.527340  150525 start.go:83] releasing machines lock for "scheduled-stop-599041", held for 11.478739194s
	I1101 09:05:10.527417  150525 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" scheduled-stop-599041
	I1101 09:05:10.544789  150525 ssh_runner.go:195] Run: cat /version.json
	I1101 09:05:10.544867  150525 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-599041
	I1101 09:05:10.545109  150525 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 09:05:10.545160  150525 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-599041
	I1101 09:05:10.561623  150525 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32969 SSHKeyPath:/home/jenkins/minikube-integration/21835-2307/.minikube/machines/scheduled-stop-599041/id_rsa Username:docker}
	I1101 09:05:10.567212  150525 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32969 SSHKeyPath:/home/jenkins/minikube-integration/21835-2307/.minikube/machines/scheduled-stop-599041/id_rsa Username:docker}
	I1101 09:05:10.661800  150525 ssh_runner.go:195] Run: systemctl --version
	I1101 09:05:10.760453  150525 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 09:05:10.764863  150525 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 09:05:10.764920  150525 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 09:05:10.792852  150525 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
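
The find/mv pipeline above neutralizes competing CNI configs: files in /etc/cni/net.d matching *bridge* or *podman* are renamed with a .mk_disabled suffix rather than deleted, so the kindnet configuration chosen later takes effect while the originals stay recoverable. A sketch of that rename-to-disable pattern (hypothetical helper; assumes it runs with enough privilege on the node):

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
    )

    // disableBridgeCNI renames bridge/podman CNI config files to
    // <name>.mk_disabled, mirroring the find/mv pipeline in the log.
    func disableBridgeCNI(dir string) ([]string, error) {
        entries, err := os.ReadDir(dir)
        if err != nil {
            return nil, err
        }
        var disabled []string
        for _, e := range entries {
            name := e.Name()
            if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
                continue
            }
            if !strings.Contains(name, "bridge") && !strings.Contains(name, "podman") {
                continue
            }
            src := filepath.Join(dir, name)
            if err := os.Rename(src, src+".mk_disabled"); err != nil {
                return disabled, err
            }
            disabled = append(disabled, src)
        }
        return disabled, nil
    }

    func main() {
        disabled, err := disableBridgeCNI("/etc/cni/net.d")
        fmt.Println(disabled, err)
    }
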
	I1101 09:05:10.792865  150525 start.go:496] detecting cgroup driver to use...
	I1101 09:05:10.792897  150525 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1101 09:05:10.792950  150525 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1101 09:05:10.808265  150525 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1101 09:05:10.821088  150525 docker.go:218] disabling cri-docker service (if available) ...
	I1101 09:05:10.821139  150525 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 09:05:10.837150  150525 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 09:05:10.855925  150525 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 09:05:10.974984  150525 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 09:05:11.098725  150525 docker.go:234] disabling docker service ...
	I1101 09:05:11.098812  150525 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 09:05:11.122217  150525 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 09:05:11.136771  150525 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 09:05:11.253793  150525 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 09:05:11.374168  150525 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 09:05:11.387382  150525 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 09:05:11.401008  150525 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1101 09:05:11.410112  150525 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1101 09:05:11.418692  150525 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1101 09:05:11.418752  150525 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1101 09:05:11.427397  150525 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1101 09:05:11.436037  150525 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1101 09:05:11.444764  150525 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1101 09:05:11.453255  150525 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 09:05:11.461381  150525 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1101 09:05:11.470194  150525 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1101 09:05:11.478863  150525 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1101 09:05:11.487828  150525 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 09:05:11.495475  150525 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 09:05:11.502890  150525 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:05:11.612108  150525 ssh_runner.go:195] Run: sudo systemctl restart containerd
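
The preceding sed commands rewrite /etc/containerd/config.toml in place: they pin sandbox_image to registry.k8s.io/pause:3.10.1, set restrict_oom_score_adj to false, force SystemdCgroup = false to match the "cgroupfs" driver detected on the host, migrate io.containerd.runtime.v1.linux and runc.v1 references to io.containerd.runc.v2, and point conf_dir at /etc/cni/net.d, after which systemd is reloaded and containerd restarted. A regexp-based sketch of just the SystemdCgroup toggle (illustrative only, not minikube's implementation):

    package main

    import (
        "fmt"
        "os"
        "regexp"
    )

    // systemdCgroupRe matches the same line the sed one-liner in the
    // log rewrites, keeping the original indentation in group 1.
    var systemdCgroupRe = regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)

    func setSystemdCgroup(path string, enabled bool) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        repl := fmt.Sprintf("${1}SystemdCgroup = %t", enabled)
        out := systemdCgroupRe.ReplaceAll(data, []byte(repl))
        return os.WriteFile(path, out, 0o644)
    }

    func main() {
        // false matches the "cgroupfs" driver detected on this host.
        if err := setSystemdCgroup("/etc/containerd/config.toml", false); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }
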
	I1101 09:05:11.756972  150525 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1101 09:05:11.757037  150525 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1101 09:05:11.761078  150525 start.go:564] Will wait 60s for crictl version
	I1101 09:05:11.761141  150525 ssh_runner.go:195] Run: which crictl
	I1101 09:05:11.764566  150525 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1101 09:05:11.792103  150525 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.7.28
	RuntimeApiVersion:  v1
	I1101 09:05:11.792174  150525 ssh_runner.go:195] Run: containerd --version
	I1101 09:05:11.815075  150525 ssh_runner.go:195] Run: containerd --version
	I1101 09:05:11.842983  150525 out.go:179] * Preparing Kubernetes v1.34.1 on containerd 1.7.28 ...
	I1101 09:05:11.846025  150525 cli_runner.go:164] Run: docker network inspect scheduled-stop-599041 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 09:05:11.862301  150525 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1101 09:05:11.866104  150525 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
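
The hosts edit above is an idempotent upsert: grep -v strips any stale host.minikube.internal line, the fresh mapping is appended, and the result is copied back over /etc/hosts through a temp file. The identical pattern runs again below for control-plane.minikube.internal. A hypothetical Go equivalent (sketch only; the real step needs root and writes through a temp file as the shell pipeline does):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // upsertHost rewrites an /etc/hosts-style file so exactly one line
    // maps the given name, mirroring the grep -v / echo / cp pipeline.
    func upsertHost(path, ip, name string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        lines := strings.Split(strings.TrimRight(string(data), "\n"), "\n")
        kept := lines[:0]
        for _, line := range lines {
            if strings.HasSuffix(line, "\t"+name) {
                continue // drop any stale mapping, like grep -v above
            }
            kept = append(kept, line)
        }
        kept = append(kept, ip+"\t"+name)
        return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
    }

    func main() {
        fmt.Println(upsertHost("/etc/hosts", "192.168.76.1", "host.minikube.internal"))
    }
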
	I1101 09:05:11.876404  150525 kubeadm.go:884] updating cluster {Name:scheduled-stop-599041 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:scheduled-stop-599041 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1101 09:05:11.876513  150525 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1101 09:05:11.876583  150525 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 09:05:11.901121  150525 containerd.go:627] all images are preloaded for containerd runtime.
	I1101 09:05:11.901133  150525 containerd.go:534] Images already preloaded, skipping extraction
	I1101 09:05:11.901193  150525 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 09:05:11.925562  150525 containerd.go:627] all images are preloaded for containerd runtime.
	I1101 09:05:11.925575  150525 cache_images.go:86] Images are preloaded, skipping loading
	I1101 09:05:11.925582  150525 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 containerd true true} ...
	I1101 09:05:11.925669  150525 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=scheduled-stop-599041 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:scheduled-stop-599041 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1101 09:05:11.925752  150525 ssh_runner.go:195] Run: sudo crictl info
	I1101 09:05:11.952851  150525 cni.go:84] Creating CNI manager for ""
	I1101 09:05:11.952862  150525 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1101 09:05:11.952883  150525 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1101 09:05:11.952904  150525 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:scheduled-stop-599041 NodeName:scheduled-stop-599041 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 09:05:11.953013  150525 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "scheduled-stop-599041"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
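
The kubeadm.yaml rendered above stacks four YAML documents: InitConfiguration (node registration and advertise address), ClusterConfiguration (cert SANs, component extraArgs, the control-plane endpoint), KubeletConfiguration (cgroupfs driver, containerd endpoint, and, per the inline comment, disk resource management disabled via imageGCHighThresholdPercent: 100 and 0% evictionHard thresholds), and KubeProxyConfiguration. A small sketch of splitting such a multi-document stream (string-based purely for illustration; a real consumer would use a YAML decoder):

    package main

    import (
        "fmt"
        "strings"
    )

    // splitManifests splits a multi-document YAML stream like the
    // kubeadm.yaml above into individual documents.
    func splitManifests(stream string) []string {
        var docs []string
        for _, d := range strings.Split(stream, "\n---\n") {
            if s := strings.TrimSpace(d); s != "" {
                docs = append(docs, s)
            }
        }
        return docs
    }

    func main() {
        stream := "kind: InitConfiguration\n---\nkind: ClusterConfiguration\n---\nkind: KubeletConfiguration\n---\nkind: KubeProxyConfiguration\n"
        for _, doc := range splitManifests(stream) {
            fmt.Println(strings.SplitN(doc, "\n", 2)[0]) // the kind of each document
        }
    }
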
	
	I1101 09:05:11.953094  150525 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1101 09:05:11.960892  150525 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 09:05:11.960953  150525 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 09:05:11.968500  150525 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (325 bytes)
	I1101 09:05:11.981281  150525 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 09:05:11.995762  150525 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2234 bytes)
	I1101 09:05:12.011084  150525 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1101 09:05:12.015186  150525 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 09:05:12.025285  150525 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:05:12.135453  150525 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 09:05:12.152271  150525 certs.go:69] Setting up /home/jenkins/minikube-integration/21835-2307/.minikube/profiles/scheduled-stop-599041 for IP: 192.168.76.2
	I1101 09:05:12.152281  150525 certs.go:195] generating shared ca certs ...
	I1101 09:05:12.152295  150525 certs.go:227] acquiring lock for ca certs: {Name:mk6850b6a29536d9828e4f0f9b1ede9faf3180b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:05:12.152451  150525 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21835-2307/.minikube/ca.key
	I1101 09:05:12.152490  150525 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21835-2307/.minikube/proxy-client-ca.key
	I1101 09:05:12.152496  150525 certs.go:257] generating profile certs ...
	I1101 09:05:12.152550  150525 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21835-2307/.minikube/profiles/scheduled-stop-599041/client.key
	I1101 09:05:12.152559  150525 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21835-2307/.minikube/profiles/scheduled-stop-599041/client.crt with IP's: []
	I1101 09:05:13.794029  150525 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21835-2307/.minikube/profiles/scheduled-stop-599041/client.crt ...
	I1101 09:05:13.794045  150525 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-2307/.minikube/profiles/scheduled-stop-599041/client.crt: {Name:mkb7f90bc6723b8dbd945928ee29c0ee103c370c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:05:13.794254  150525 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21835-2307/.minikube/profiles/scheduled-stop-599041/client.key ...
	I1101 09:05:13.794262  150525 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-2307/.minikube/profiles/scheduled-stop-599041/client.key: {Name:mk9cfe17088238a1950a2270fb89f71aae682ea1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:05:13.794363  150525 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21835-2307/.minikube/profiles/scheduled-stop-599041/apiserver.key.cdac70e0
	I1101 09:05:13.794379  150525 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21835-2307/.minikube/profiles/scheduled-stop-599041/apiserver.crt.cdac70e0 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1101 09:05:15.341317  150525 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21835-2307/.minikube/profiles/scheduled-stop-599041/apiserver.crt.cdac70e0 ...
	I1101 09:05:15.341332  150525 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-2307/.minikube/profiles/scheduled-stop-599041/apiserver.crt.cdac70e0: {Name:mk07eb870f965531247aa2d264dd9102c9defe01 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:05:15.341524  150525 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21835-2307/.minikube/profiles/scheduled-stop-599041/apiserver.key.cdac70e0 ...
	I1101 09:05:15.341532  150525 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-2307/.minikube/profiles/scheduled-stop-599041/apiserver.key.cdac70e0: {Name:mk0af640416ce30af243d4dde4d977deb290ebbd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:05:15.341617  150525 certs.go:382] copying /home/jenkins/minikube-integration/21835-2307/.minikube/profiles/scheduled-stop-599041/apiserver.crt.cdac70e0 -> /home/jenkins/minikube-integration/21835-2307/.minikube/profiles/scheduled-stop-599041/apiserver.crt
	I1101 09:05:15.341747  150525 certs.go:386] copying /home/jenkins/minikube-integration/21835-2307/.minikube/profiles/scheduled-stop-599041/apiserver.key.cdac70e0 -> /home/jenkins/minikube-integration/21835-2307/.minikube/profiles/scheduled-stop-599041/apiserver.key
	I1101 09:05:15.341822  150525 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21835-2307/.minikube/profiles/scheduled-stop-599041/proxy-client.key
	I1101 09:05:15.341835  150525 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21835-2307/.minikube/profiles/scheduled-stop-599041/proxy-client.crt with IP's: []
	I1101 09:05:15.552716  150525 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21835-2307/.minikube/profiles/scheduled-stop-599041/proxy-client.crt ...
	I1101 09:05:15.552730  150525 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-2307/.minikube/profiles/scheduled-stop-599041/proxy-client.crt: {Name:mk3abfadda991c8280d486870a29a34664acd426 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:05:15.552921  150525 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21835-2307/.minikube/profiles/scheduled-stop-599041/proxy-client.key ...
	I1101 09:05:15.552928  150525 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-2307/.minikube/profiles/scheduled-stop-599041/proxy-client.key: {Name:mk4305c4db6257a9d091465139b7460e126806a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
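
The crypto.go lines above build the per-profile PKI: a client cert for "minikube-user", an apiserver serving cert covering the IPs [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2] (generated under a hashed .cdac70e0 suffix, then copied into place as apiserver.crt and apiserver.key), and a proxy-client cert for the aggregator, each signed by the cached minikubeCA. A self-contained crypto/x509 sketch of that sign-with-CA flow (illustrative only, not minikube's crypto.go; error handling elided for brevity):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "fmt"
        "math/big"
        "net"
        "time"
    )

    func main() {
        // Self-signed CA standing in for the cached minikubeCA.
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(26280 * time.Hour), // CertExpiration above
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        ca, _ := x509.ParseCertificate(caDER)

        // Serving cert for the apiserver IP set shown in the log.
        leafKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        leafTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{CommonName: "minikube"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour),
            IPAddresses: []net.IP{
                net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
                net.ParseIP("10.0.0.1"), net.ParseIP("192.168.76.2"),
            },
            KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        leafDER, err := x509.CreateCertificate(rand.Reader, leafTmpl, ca, &leafKey.PublicKey, caKey)
        fmt.Println("leaf DER bytes:", len(leafDER), "err:", err)
    }
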
	I1101 09:05:15.553133  150525 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-2307/.minikube/certs/4107.pem (1338 bytes)
	W1101 09:05:15.553174  150525 certs.go:480] ignoring /home/jenkins/minikube-integration/21835-2307/.minikube/certs/4107_empty.pem, impossibly tiny 0 bytes
	I1101 09:05:15.553181  150525 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-2307/.minikube/certs/ca-key.pem (1675 bytes)
	I1101 09:05:15.553205  150525 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-2307/.minikube/certs/ca.pem (1082 bytes)
	I1101 09:05:15.553225  150525 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-2307/.minikube/certs/cert.pem (1123 bytes)
	I1101 09:05:15.553254  150525 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-2307/.minikube/certs/key.pem (1679 bytes)
	I1101 09:05:15.553297  150525 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-2307/.minikube/files/etc/ssl/certs/41072.pem (1708 bytes)
	I1101 09:05:15.553925  150525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2307/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 09:05:15.573727  150525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2307/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1101 09:05:15.592568  150525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2307/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 09:05:15.612127  150525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2307/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1101 09:05:15.630426  150525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2307/.minikube/profiles/scheduled-stop-599041/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1101 09:05:15.649237  150525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2307/.minikube/profiles/scheduled-stop-599041/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1101 09:05:15.666581  150525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2307/.minikube/profiles/scheduled-stop-599041/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 09:05:15.684437  150525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2307/.minikube/profiles/scheduled-stop-599041/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1101 09:05:15.701533  150525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2307/.minikube/files/etc/ssl/certs/41072.pem --> /usr/share/ca-certificates/41072.pem (1708 bytes)
	I1101 09:05:15.719352  150525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2307/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 09:05:15.737069  150525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2307/.minikube/certs/4107.pem --> /usr/share/ca-certificates/4107.pem (1338 bytes)
	I1101 09:05:15.755141  150525 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 09:05:15.768463  150525 ssh_runner.go:195] Run: openssl version
	I1101 09:05:15.774825  150525 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/41072.pem && ln -fs /usr/share/ca-certificates/41072.pem /etc/ssl/certs/41072.pem"
	I1101 09:05:15.783429  150525 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41072.pem
	I1101 09:05:15.787589  150525 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  1 08:36 /usr/share/ca-certificates/41072.pem
	I1101 09:05:15.787657  150525 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41072.pem
	I1101 09:05:15.829140  150525 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/41072.pem /etc/ssl/certs/3ec20f2e.0"
	I1101 09:05:15.838115  150525 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 09:05:15.846884  150525 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:05:15.850847  150525 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 08:30 /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:05:15.850902  150525 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:05:15.892430  150525 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 09:05:15.902232  150525 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4107.pem && ln -fs /usr/share/ca-certificates/4107.pem /etc/ssl/certs/4107.pem"
	I1101 09:05:15.911060  150525 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4107.pem
	I1101 09:05:15.918251  150525 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  1 08:36 /usr/share/ca-certificates/4107.pem
	I1101 09:05:15.918303  150525 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4107.pem
	I1101 09:05:15.964112  150525 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4107.pem /etc/ssl/certs/51391683.0"
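
The openssl x509 -hash runs and the conditional ln -fs calls above implement OpenSSL's CA lookup convention: a trust directory is searched by subject-hash filenames of the form <hash>.0, so each installed PEM (3ec20f2e.0, b5213941.0, and 51391683.0 in this run) gets a hash-named symlink. A hypothetical Go sketch of the same step (assumes the openssl CLI is available):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // linkBySubjectHash computes the OpenSSL subject hash of a PEM cert
    // and symlinks <hash>.0 at it, as the test -L / ln -fs lines do.
    func linkBySubjectHash(pemPath, certsDir string) (string, error) {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            return "", err
        }
        link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
        if _, err := os.Lstat(link); err == nil {
            return link, nil // already linked, mirrors test -L above
        }
        return link, os.Symlink(pemPath, link)
    }

    func main() {
        link, err := linkBySubjectHash("/etc/ssl/certs/minikubeCA.pem", "/etc/ssl/certs")
        fmt.Println(link, err) // e.g. /etc/ssl/certs/b5213941.0 above
    }
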
	I1101 09:05:15.972422  150525 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 09:05:15.975994  150525 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1101 09:05:15.976037  150525 kubeadm.go:401] StartCluster: {Name:scheduled-stop-599041 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:scheduled-stop-599041 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:05:15.976113  150525 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1101 09:05:15.976190  150525 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 09:05:16.004490  150525 cri.go:89] found id: ""
	I1101 09:05:16.004553  150525 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 09:05:16.013433  150525 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1101 09:05:16.021898  150525 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1101 09:05:16.021964  150525 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1101 09:05:16.030240  150525 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1101 09:05:16.030248  150525 kubeadm.go:158] found existing configuration files:
	
	I1101 09:05:16.030298  150525 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1101 09:05:16.038657  150525 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1101 09:05:16.038724  150525 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1101 09:05:16.046547  150525 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1101 09:05:16.054553  150525 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1101 09:05:16.054614  150525 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1101 09:05:16.062818  150525 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1101 09:05:16.071055  150525 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1101 09:05:16.071114  150525 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1101 09:05:16.078600  150525 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1101 09:05:16.086599  150525 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1101 09:05:16.086667  150525 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1101 09:05:16.094721  150525 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1101 09:05:16.135824  150525 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1101 09:05:16.135941  150525 kubeadm.go:319] [preflight] Running pre-flight checks
	I1101 09:05:16.158923  150525 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1101 09:05:16.158987  150525 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1101 09:05:16.159023  150525 kubeadm.go:319] OS: Linux
	I1101 09:05:16.159070  150525 kubeadm.go:319] CGROUPS_CPU: enabled
	I1101 09:05:16.159119  150525 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1101 09:05:16.159167  150525 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1101 09:05:16.159216  150525 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1101 09:05:16.159266  150525 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1101 09:05:16.159315  150525 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1101 09:05:16.159361  150525 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1101 09:05:16.159421  150525 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1101 09:05:16.159471  150525 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1101 09:05:16.229618  150525 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1101 09:05:16.229743  150525 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1101 09:05:16.229919  150525 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1101 09:05:16.235590  150525 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1101 09:05:16.241422  150525 out.go:252]   - Generating certificates and keys ...
	I1101 09:05:16.241561  150525 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1101 09:05:16.241636  150525 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1101 09:05:16.662772  150525 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1101 09:05:16.738312  150525 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1101 09:05:17.230766  150525 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1101 09:05:17.660732  150525 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1101 09:05:18.438847  150525 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1101 09:05:18.439134  150525 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost scheduled-stop-599041] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1101 09:05:18.944837  150525 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1101 09:05:18.945371  150525 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost scheduled-stop-599041] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1101 09:05:19.309344  150525 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1101 09:05:19.997441  150525 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1101 09:05:20.597177  150525 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1101 09:05:20.597490  150525 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1101 09:05:20.737984  150525 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1101 09:05:21.175021  150525 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1101 09:05:21.859631  150525 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1101 09:05:22.502694  150525 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1101 09:05:23.363292  150525 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1101 09:05:23.364080  150525 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1101 09:05:23.366823  150525 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1101 09:05:23.370326  150525 out.go:252]   - Booting up control plane ...
	I1101 09:05:23.370431  150525 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1101 09:05:23.370511  150525 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1101 09:05:23.370590  150525 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1101 09:05:23.403428  150525 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1101 09:05:23.403534  150525 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1101 09:05:23.411463  150525 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1101 09:05:23.411971  150525 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1101 09:05:23.412177  150525 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1101 09:05:23.540975  150525 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1101 09:05:23.541092  150525 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1101 09:05:25.042890  150525 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.502180396s
	I1101 09:05:25.047290  150525 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1101 09:05:25.047403  150525 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1101 09:05:25.047502  150525 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1101 09:05:25.047591  150525 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1101 09:05:29.218974  150525 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 4.171819339s
	I1101 09:05:30.709511  150525 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 5.662647809s
	I1101 09:05:32.048443  150525 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 7.001434313s
	I1101 09:05:32.072586  150525 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1101 09:05:32.096469  150525 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1101 09:05:32.126358  150525 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1101 09:05:32.126563  150525 kubeadm.go:319] [mark-control-plane] Marking the node scheduled-stop-599041 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1101 09:05:32.140442  150525 kubeadm.go:319] [bootstrap-token] Using token: 5obfx4.nkvkc2gyi15s7edh
	I1101 09:05:32.143315  150525 out.go:252]   - Configuring RBAC rules ...
	I1101 09:05:32.143447  150525 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1101 09:05:32.149029  150525 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1101 09:05:32.157334  150525 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1101 09:05:32.166450  150525 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1101 09:05:32.172922  150525 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1101 09:05:32.177326  150525 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1101 09:05:32.457138  150525 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1101 09:05:32.882068  150525 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1101 09:05:33.455462  150525 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1101 09:05:33.456433  150525 kubeadm.go:319] 
	I1101 09:05:33.456515  150525 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1101 09:05:33.456520  150525 kubeadm.go:319] 
	I1101 09:05:33.456599  150525 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1101 09:05:33.456603  150525 kubeadm.go:319] 
	I1101 09:05:33.456628  150525 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1101 09:05:33.456689  150525 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1101 09:05:33.456741  150525 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1101 09:05:33.456744  150525 kubeadm.go:319] 
	I1101 09:05:33.456800  150525 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1101 09:05:33.456803  150525 kubeadm.go:319] 
	I1101 09:05:33.456858  150525 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1101 09:05:33.456881  150525 kubeadm.go:319] 
	I1101 09:05:33.456935  150525 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1101 09:05:33.457011  150525 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1101 09:05:33.457082  150525 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1101 09:05:33.457085  150525 kubeadm.go:319] 
	I1101 09:05:33.457180  150525 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1101 09:05:33.457271  150525 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1101 09:05:33.457275  150525 kubeadm.go:319] 
	I1101 09:05:33.457361  150525 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 5obfx4.nkvkc2gyi15s7edh \
	I1101 09:05:33.457467  150525 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:8e6f9f2b3de173f1f4e906e50c50de3f7183de6384d3ec0b8a8e2be0c3eae33b \
	I1101 09:05:33.457487  150525 kubeadm.go:319] 	--control-plane 
	I1101 09:05:33.457490  150525 kubeadm.go:319] 
	I1101 09:05:33.457578  150525 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1101 09:05:33.457581  150525 kubeadm.go:319] 
	I1101 09:05:33.457665  150525 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 5obfx4.nkvkc2gyi15s7edh \
	I1101 09:05:33.457798  150525 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:8e6f9f2b3de173f1f4e906e50c50de3f7183de6384d3ec0b8a8e2be0c3eae33b 
	I1101 09:05:33.462457  150525 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1101 09:05:33.462698  150525 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1101 09:05:33.462810  150525 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1101 09:05:33.462826  150525 cni.go:84] Creating CNI manager for ""
	I1101 09:05:33.462833  150525 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1101 09:05:33.467781  150525 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1101 09:05:33.470616  150525 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1101 09:05:33.474565  150525 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1101 09:05:33.474576  150525 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1101 09:05:33.488979  150525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1101 09:05:33.824883  150525 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1101 09:05:33.825036  150525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:05:33.825109  150525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes scheduled-stop-599041 minikube.k8s.io/updated_at=2025_11_01T09_05_33_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=21e20c7776311c6e29254646bf2620ea610dd192 minikube.k8s.io/name=scheduled-stop-599041 minikube.k8s.io/primary=true
	I1101 09:05:34.029341  150525 kubeadm.go:1114] duration metric: took 204.36283ms to wait for elevateKubeSystemPrivileges
	I1101 09:05:34.029367  150525 ops.go:34] apiserver oom_adj: -16
	I1101 09:05:34.045242  150525 kubeadm.go:403] duration metric: took 18.06920126s to StartCluster
	I1101 09:05:34.045266  150525 settings.go:142] acquiring lock: {Name:mkb61beb9c55121316e3b119291d0716c14c3a6a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:05:34.045330  150525 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21835-2307/kubeconfig
	I1101 09:05:34.046053  150525 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-2307/kubeconfig: {Name:mk1f500f846ffda8ad893dd2bff7271191c5c640 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:05:34.046276  150525 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1101 09:05:34.046381  150525 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1101 09:05:34.046632  150525 config.go:182] Loaded profile config "scheduled-stop-599041": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1101 09:05:34.046677  150525 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1101 09:05:34.046742  150525 addons.go:70] Setting storage-provisioner=true in profile "scheduled-stop-599041"
	I1101 09:05:34.046755  150525 addons.go:239] Setting addon storage-provisioner=true in "scheduled-stop-599041"
	I1101 09:05:34.046774  150525 host.go:66] Checking if "scheduled-stop-599041" exists ...
	I1101 09:05:34.047269  150525 cli_runner.go:164] Run: docker container inspect scheduled-stop-599041 --format={{.State.Status}}
	I1101 09:05:34.047670  150525 addons.go:70] Setting default-storageclass=true in profile "scheduled-stop-599041"
	I1101 09:05:34.047688  150525 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "scheduled-stop-599041"
	I1101 09:05:34.048021  150525 cli_runner.go:164] Run: docker container inspect scheduled-stop-599041 --format={{.State.Status}}
	I1101 09:05:34.049623  150525 out.go:179] * Verifying Kubernetes components...
	I1101 09:05:34.053964  150525 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:05:34.092690  150525 addons.go:239] Setting addon default-storageclass=true in "scheduled-stop-599041"
	I1101 09:05:34.092718  150525 host.go:66] Checking if "scheduled-stop-599041" exists ...
	I1101 09:05:34.093128  150525 cli_runner.go:164] Run: docker container inspect scheduled-stop-599041 --format={{.State.Status}}
	I1101 09:05:34.096417  150525 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 09:05:34.099319  150525 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 09:05:34.099331  150525 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 09:05:34.099411  150525 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-599041
	I1101 09:05:34.132785  150525 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 09:05:34.132798  150525 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 09:05:34.132863  150525 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-599041
	I1101 09:05:34.160334  150525 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32969 SSHKeyPath:/home/jenkins/minikube-integration/21835-2307/.minikube/machines/scheduled-stop-599041/id_rsa Username:docker}
	I1101 09:05:34.179238  150525 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32969 SSHKeyPath:/home/jenkins/minikube-integration/21835-2307/.minikube/machines/scheduled-stop-599041/id_rsa Username:docker}
	I1101 09:05:34.280480  150525 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
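
The pipeline above pulls the coredns ConfigMap, splices a hosts stanza (mapping host.minikube.internal to the gateway 192.168.76.1, with fallthrough) ahead of the forward-to-resolv.conf plugin, adds a log line after errors, and replaces the ConfigMap with kubectl; the "host record injected" message below confirms it. A string-rewrite sketch of just the hosts insertion (illustrative only, not minikube's code):

    package main

    import (
        "fmt"
        "strings"
    )

    // injectHost inserts a hosts{} stanza before the forward plugin so
    // host.minikube.internal resolves in-cluster, like the sed /i above.
    func injectHost(corefile, ip string) string {
        hosts := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n", ip)
        var b strings.Builder
        for _, line := range strings.SplitAfter(corefile, "\n") {
            if strings.HasPrefix(strings.TrimSpace(line), "forward . /etc/resolv.conf") {
                b.WriteString(hosts) // splice ahead of the forward plugin
            }
            b.WriteString(line)
        }
        return b.String()
    }

    func main() {
        corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf {\n           max_concurrent 1000\n        }\n}\n"
        fmt.Print(injectHost(corefile, "192.168.76.1"))
    }
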
	I1101 09:05:34.335719  150525 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 09:05:34.394195  150525 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 09:05:34.406858  150525 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
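
Addon enablement above follows one pattern: each manifest is scp'd into /etc/kubernetes/addons, then applied with the cluster's own pinned kubectl against the node-local kubeconfig. A sketch of that apply step (hypothetical helper mirroring the two commands above):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // applyAddon applies one addon manifest with the pinned kubectl,
    // mirroring the `sudo KUBECONFIG=... kubectl apply -f ...` lines.
    func applyAddon(kubectl, kubeconfig, manifest string) error {
        cmd := exec.Command("sudo", "KUBECONFIG="+kubeconfig, kubectl, "apply", "-f", manifest)
        if out, err := cmd.CombinedOutput(); err != nil {
            return fmt.Errorf("apply %s: %v: %s", manifest, err, out)
        }
        return nil
    }

    func main() {
        kubectl := "/var/lib/minikube/binaries/v1.34.1/kubectl"
        for _, m := range []string{
            "/etc/kubernetes/addons/storageclass.yaml",
            "/etc/kubernetes/addons/storage-provisioner.yaml",
        } {
            fmt.Println(m, applyAddon(kubectl, "/var/lib/minikube/kubeconfig", m))
        }
    }
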
	I1101 09:05:34.630316  150525 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1101 09:05:34.632039  150525 api_server.go:52] waiting for apiserver process to appear ...
	I1101 09:05:34.632088  150525 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 09:05:34.888367  150525 api_server.go:72] duration metric: took 842.06647ms to wait for apiserver process to appear ...
	I1101 09:05:34.888378  150525 api_server.go:88] waiting for apiserver healthz status ...
	I1101 09:05:34.888394  150525 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1101 09:05:34.891326  150525 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1101 09:05:34.894415  150525 addons.go:515] duration metric: took 847.713988ms for enable addons: enabled=[default-storageclass storage-provisioner]
	I1101 09:05:34.900279  150525 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1101 09:05:34.901967  150525 api_server.go:141] control plane version: v1.34.1
	I1101 09:05:34.901983  150525 api_server.go:131] duration metric: took 13.59965ms to wait for apiserver health ...
	I1101 09:05:34.902003  150525 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 09:05:34.905167  150525 system_pods.go:59] 5 kube-system pods found
	I1101 09:05:34.905187  150525 system_pods.go:61] "etcd-scheduled-stop-599041" [df3d1822-dea0-4741-8999-bd3911262f6a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 09:05:34.905195  150525 system_pods.go:61] "kube-apiserver-scheduled-stop-599041" [dea0d936-87c5-42c7-b69b-024bf9be659d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 09:05:34.905201  150525 system_pods.go:61] "kube-controller-manager-scheduled-stop-599041" [03160410-5e30-4035-8cc2-46591d93ec33] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 09:05:34.905208  150525 system_pods.go:61] "kube-scheduler-scheduled-stop-599041" [b05c1749-b943-4824-8827-0da270b4a514] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 09:05:34.905212  150525 system_pods.go:61] "storage-provisioner" [046043f5-3b02-4b43-8874-284f01dcc398] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1101 09:05:34.905218  150525 system_pods.go:74] duration metric: took 3.210013ms to wait for pod list to return data ...
	I1101 09:05:34.905228  150525 kubeadm.go:587] duration metric: took 858.931526ms to wait for: map[apiserver:true system_pods:true]
	I1101 09:05:34.905241  150525 node_conditions.go:102] verifying NodePressure condition ...
	I1101 09:05:34.908007  150525 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1101 09:05:34.908025  150525 node_conditions.go:123] node cpu capacity is 2
	I1101 09:05:34.908039  150525 node_conditions.go:105] duration metric: took 2.790946ms to run NodePressure ...
	I1101 09:05:34.908051  150525 start.go:242] waiting for startup goroutines ...
	I1101 09:05:35.134834  150525 kapi.go:214] "coredns" deployment in "kube-system" namespace and "scheduled-stop-599041" context rescaled to 1 replicas
	I1101 09:05:35.134863  150525 start.go:247] waiting for cluster config update ...
	I1101 09:05:35.134875  150525 start.go:256] writing updated cluster config ...
	I1101 09:05:35.135190  150525 ssh_runner.go:195] Run: rm -f paused
	I1101 09:05:35.205581  150525 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1101 09:05:35.209195  150525 out.go:179] * Done! kubectl is now configured to use "scheduled-stop-599041" cluster and "default" namespace by default
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                             NAMESPACE
	6eef76a9f5230       b5f57ec6b9867       11 seconds ago      Running             kube-scheduler            0                   9b05b51c59fce       kube-scheduler-scheduled-stop-599041            kube-system
	fd502af6b3406       7eb2c6ff0c5a7       11 seconds ago      Running             kube-controller-manager   0                   49087a6407c64       kube-controller-manager-scheduled-stop-599041   kube-system
	ae43d133e1cd3       43911e833d64d       11 seconds ago      Running             kube-apiserver            0                   f7da27c994162       kube-apiserver-scheduled-stop-599041            kube-system
	2595eda52acf6       a1894772a478e       11 seconds ago      Running             etcd                      0                   8536336348a5a       etcd-scheduled-stop-599041                      kube-system
	
	
	==> containerd <==
	Nov 01 09:05:11 scheduled-stop-599041 containerd[762]: time="2025-11-01T09:05:11.755934785Z" level=info msg="containerd successfully booted in 0.090085s"
	Nov 01 09:05:25 scheduled-stop-599041 containerd[762]: time="2025-11-01T09:05:25.149916799Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:etcd-scheduled-stop-599041,Uid:88019947b935004d6bf205a7a549be36,Namespace:kube-system,Attempt:0,}"
	Nov 01 09:05:25 scheduled-stop-599041 containerd[762]: time="2025-11-01T09:05:25.153667693Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-scheduled-stop-599041,Uid:449597b5207707a77f08f5708640cfbc,Namespace:kube-system,Attempt:0,}"
	Nov 01 09:05:25 scheduled-stop-599041 containerd[762]: time="2025-11-01T09:05:25.164484653Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-scheduled-stop-599041,Uid:c825b846f35a1ae5d6628cea7dc686d2,Namespace:kube-system,Attempt:0,}"
	Nov 01 09:05:25 scheduled-stop-599041 containerd[762]: time="2025-11-01T09:05:25.169737664Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-scheduled-stop-599041,Uid:1247b3c5987decb90966f535d48db508,Namespace:kube-system,Attempt:0,}"
	Nov 01 09:05:25 scheduled-stop-599041 containerd[762]: time="2025-11-01T09:05:25.311261391Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:etcd-scheduled-stop-599041,Uid:88019947b935004d6bf205a7a549be36,Namespace:kube-system,Attempt:0,} returns sandbox id \"8536336348a5ab754e7c01132943f2acd93c042a9310df30615a2fe742b6b450\""
	Nov 01 09:05:25 scheduled-stop-599041 containerd[762]: time="2025-11-01T09:05:25.321605778Z" level=info msg="CreateContainer within sandbox \"8536336348a5ab754e7c01132943f2acd93c042a9310df30615a2fe742b6b450\" for container &ContainerMetadata{Name:etcd,Attempt:0,}"
	Nov 01 09:05:25 scheduled-stop-599041 containerd[762]: time="2025-11-01T09:05:25.329489390Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-scheduled-stop-599041,Uid:449597b5207707a77f08f5708640cfbc,Namespace:kube-system,Attempt:0,} returns sandbox id \"f7da27c994162d03cd1ed27b6a973a40b64cc3186b0c775a36f7ceff4628fa9c\""
	Nov 01 09:05:25 scheduled-stop-599041 containerd[762]: time="2025-11-01T09:05:25.343340097Z" level=info msg="CreateContainer within sandbox \"f7da27c994162d03cd1ed27b6a973a40b64cc3186b0c775a36f7ceff4628fa9c\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
	Nov 01 09:05:25 scheduled-stop-599041 containerd[762]: time="2025-11-01T09:05:25.377471115Z" level=info msg="CreateContainer within sandbox \"8536336348a5ab754e7c01132943f2acd93c042a9310df30615a2fe742b6b450\" for &ContainerMetadata{Name:etcd,Attempt:0,} returns container id \"2595eda52acf6fda108c9a6599cf9e0e5e1d769bc8738a456bf000c1641ab539\""
	Nov 01 09:05:25 scheduled-stop-599041 containerd[762]: time="2025-11-01T09:05:25.378349953Z" level=info msg="StartContainer for \"2595eda52acf6fda108c9a6599cf9e0e5e1d769bc8738a456bf000c1641ab539\""
	Nov 01 09:05:25 scheduled-stop-599041 containerd[762]: time="2025-11-01T09:05:25.381549275Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-scheduled-stop-599041,Uid:c825b846f35a1ae5d6628cea7dc686d2,Namespace:kube-system,Attempt:0,} returns sandbox id \"49087a6407c64f0f3d20106bcfa6325d700df347064f8c3c530251d267beacbc\""
	Nov 01 09:05:25 scheduled-stop-599041 containerd[762]: time="2025-11-01T09:05:25.381805172Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-scheduled-stop-599041,Uid:1247b3c5987decb90966f535d48db508,Namespace:kube-system,Attempt:0,} returns sandbox id \"9b05b51c59fce5beb7101d3d857c8dc991b00fefb57697697caf2cbce5bad94b\""
	Nov 01 09:05:25 scheduled-stop-599041 containerd[762]: time="2025-11-01T09:05:25.391112014Z" level=info msg="CreateContainer within sandbox \"49087a6407c64f0f3d20106bcfa6325d700df347064f8c3c530251d267beacbc\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
	Nov 01 09:05:25 scheduled-stop-599041 containerd[762]: time="2025-11-01T09:05:25.397677322Z" level=info msg="CreateContainer within sandbox \"f7da27c994162d03cd1ed27b6a973a40b64cc3186b0c775a36f7ceff4628fa9c\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"ae43d133e1cd3149dafd04b5318383a95b5024508647d37a3228a80d4bead3ca\""
	Nov 01 09:05:25 scheduled-stop-599041 containerd[762]: time="2025-11-01T09:05:25.398219935Z" level=info msg="CreateContainer within sandbox \"9b05b51c59fce5beb7101d3d857c8dc991b00fefb57697697caf2cbce5bad94b\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
	Nov 01 09:05:25 scheduled-stop-599041 containerd[762]: time="2025-11-01T09:05:25.399201150Z" level=info msg="StartContainer for \"ae43d133e1cd3149dafd04b5318383a95b5024508647d37a3228a80d4bead3ca\""
	Nov 01 09:05:25 scheduled-stop-599041 containerd[762]: time="2025-11-01T09:05:25.420376602Z" level=info msg="CreateContainer within sandbox \"49087a6407c64f0f3d20106bcfa6325d700df347064f8c3c530251d267beacbc\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"fd502af6b34069b20b2f00a06001cf95d4761e59bdea5999510671d4ba8287a1\""
	Nov 01 09:05:25 scheduled-stop-599041 containerd[762]: time="2025-11-01T09:05:25.421136391Z" level=info msg="StartContainer for \"fd502af6b34069b20b2f00a06001cf95d4761e59bdea5999510671d4ba8287a1\""
	Nov 01 09:05:25 scheduled-stop-599041 containerd[762]: time="2025-11-01T09:05:25.425803253Z" level=info msg="CreateContainer within sandbox \"9b05b51c59fce5beb7101d3d857c8dc991b00fefb57697697caf2cbce5bad94b\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"6eef76a9f5230755a79cfcbcfecd0d39220acc190b50d5b2f92976e8762ff4b2\""
	Nov 01 09:05:25 scheduled-stop-599041 containerd[762]: time="2025-11-01T09:05:25.426471151Z" level=info msg="StartContainer for \"6eef76a9f5230755a79cfcbcfecd0d39220acc190b50d5b2f92976e8762ff4b2\""
	Nov 01 09:05:25 scheduled-stop-599041 containerd[762]: time="2025-11-01T09:05:25.501804100Z" level=info msg="StartContainer for \"2595eda52acf6fda108c9a6599cf9e0e5e1d769bc8738a456bf000c1641ab539\" returns successfully"
	Nov 01 09:05:25 scheduled-stop-599041 containerd[762]: time="2025-11-01T09:05:25.502617043Z" level=info msg="StartContainer for \"ae43d133e1cd3149dafd04b5318383a95b5024508647d37a3228a80d4bead3ca\" returns successfully"
	Nov 01 09:05:25 scheduled-stop-599041 containerd[762]: time="2025-11-01T09:05:25.643211009Z" level=info msg="StartContainer for \"6eef76a9f5230755a79cfcbcfecd0d39220acc190b50d5b2f92976e8762ff4b2\" returns successfully"
	Nov 01 09:05:25 scheduled-stop-599041 containerd[762]: time="2025-11-01T09:05:25.654447216Z" level=info msg="StartContainer for \"fd502af6b34069b20b2f00a06001cf95d4761e59bdea5999510671d4ba8287a1\" returns successfully"
	
	
	==> describe nodes <==
	Name:               scheduled-stop-599041
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=scheduled-stop-599041
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=21e20c7776311c6e29254646bf2620ea610dd192
	                    minikube.k8s.io/name=scheduled-stop-599041
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_01T09_05_33_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 09:05:30 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  scheduled-stop-599041
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 09:05:32 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 09:05:33 +0000   Sat, 01 Nov 2025 09:05:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 09:05:33 +0000   Sat, 01 Nov 2025 09:05:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 09:05:33 +0000   Sat, 01 Nov 2025 09:05:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sat, 01 Nov 2025 09:05:33 +0000   Sat, 01 Nov 2025 09:05:26 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    scheduled-stop-599041
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                dc7fc40c-a18f-4e46-b20b-dd134b64d61a
	  Boot ID:                    a8ac8503-6b7a-4208-b896-162cdcafe81c
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.7.28
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (4 in total)
	  Namespace                   Name                                             CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                             ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-scheduled-stop-599041                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         4s
	  kube-system                 kube-apiserver-scheduled-stop-599041             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4s
	  kube-system                 kube-controller-manager-scheduled-stop-599041    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4s
	  kube-system                 kube-scheduler-scheduled-stop-599041             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                650m (32%)  0 (0%)
	  memory             100Mi (1%)  0 (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Warning  CgroupV1                 13s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  13s (x8 over 13s)  kubelet          Node scheduled-stop-599041 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    13s (x8 over 13s)  kubelet          Node scheduled-stop-599041 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     13s (x7 over 13s)  kubelet          Node scheduled-stop-599041 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  13s                kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 5s                 kubelet          Starting kubelet.
	  Warning  CgroupV1                 5s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeAllocatableEnforced  5s                 kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  4s                 kubelet          Node scheduled-stop-599041 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    4s                 kubelet          Node scheduled-stop-599041 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     4s                 kubelet          Node scheduled-stop-599041 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           0s                 node-controller  Node scheduled-stop-599041 event: Registered Node scheduled-stop-599041 in Controller
	
	
	==> dmesg <==
	[Nov 1 08:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014572] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.501039] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.033197] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.753566] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.779214] kauditd_printk_skb: 36 callbacks suppressed
	[Nov 1 09:03] hrtimer: interrupt took 8309137 ns
	
	
	==> etcd [2595eda52acf6fda108c9a6599cf9e0e5e1d769bc8738a456bf000c1641ab539] <==
	{"level":"warn","ts":"2025-11-01T09:05:28.504288Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49744","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:05:28.521870Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49768","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:05:28.545853Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49780","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:05:28.559027Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49798","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:05:28.579739Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49812","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:05:28.599730Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49844","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:05:28.620283Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49864","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:05:28.638951Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49894","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:05:28.653341Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49910","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:05:28.695916Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49934","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:05:28.718079Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49952","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:05:28.755375Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49972","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:05:28.769939Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49986","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:05:28.793778Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49998","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:05:28.816868Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50006","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:05:28.827608Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50024","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:05:28.846556Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50050","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:05:28.862542Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50074","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:05:28.882481Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50090","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:05:28.896748Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50110","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:05:28.919531Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50120","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:05:28.951082Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50136","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:05:28.982567Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50146","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:05:29.001206Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50164","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:05:29.162329Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50182","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:05:37 up 48 min,  0 user,  load average: 1.60, 1.98, 2.27
	Linux scheduled-stop-599041 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kube-apiserver [ae43d133e1cd3149dafd04b5318383a95b5024508647d37a3228a80d4bead3ca] <==
	I1101 09:05:30.245297       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1101 09:05:30.267405       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1101 09:05:30.275869       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1101 09:05:30.275909       1 policy_source.go:240] refreshing policies
	E1101 09:05:30.304146       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	E1101 09:05:30.309116       1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
	I1101 09:05:30.349351       1 controller.go:667] quota admission added evaluator for: namespaces
	I1101 09:05:30.362819       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1101 09:05:30.364347       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1101 09:05:30.386152       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1101 09:05:30.390566       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1101 09:05:30.521793       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1101 09:05:30.918270       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1101 09:05:30.926534       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1101 09:05:30.926727       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1101 09:05:31.673066       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1101 09:05:31.729817       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1101 09:05:31.826901       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1101 09:05:31.837534       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1101 09:05:31.838830       1 controller.go:667] quota admission added evaluator for: endpoints
	I1101 09:05:31.843911       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1101 09:05:31.965949       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1101 09:05:32.866534       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1101 09:05:32.880088       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1101 09:05:32.892418       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	
	
	==> kube-controller-manager [fd502af6b34069b20b2f00a06001cf95d4761e59bdea5999510671d4ba8287a1] <==
	I1101 09:05:37.021796       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1101 09:05:37.021999       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1101 09:05:37.022082       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1101 09:05:37.022131       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1101 09:05:37.022167       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1101 09:05:37.022647       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1101 09:05:37.022790       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 09:05:37.022943       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1101 09:05:37.024122       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1101 09:05:37.024258       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1101 09:05:37.030213       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1101 09:05:37.048403       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="scheduled-stop-599041" podCIDRs=["10.244.0.0/24"]
	I1101 09:05:37.049935       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 09:05:37.058585       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1101 09:05:37.061940       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 09:05:37.061958       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1101 09:05:37.061965       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1101 09:05:37.063343       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1101 09:05:37.063408       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1101 09:05:37.063502       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1101 09:05:37.063561       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="scheduled-stop-599041"
	I1101 09:05:37.063601       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1101 09:05:37.064180       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1101 09:05:37.064468       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1101 09:05:37.064486       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	
	
	==> kube-scheduler [6eef76a9f5230755a79cfcbcfecd0d39220acc190b50d5b2f92976e8762ff4b2] <==
	I1101 09:05:30.686380       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 09:05:30.689451       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1101 09:05:30.689539       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 09:05:30.690631       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 09:05:30.689563       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1101 09:05:30.694341       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1101 09:05:30.696877       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1101 09:05:30.697040       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1101 09:05:30.697135       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1101 09:05:30.697249       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1101 09:05:30.697342       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1101 09:05:30.697427       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1101 09:05:30.697503       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1101 09:05:30.697608       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1101 09:05:30.702323       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1101 09:05:30.706114       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1101 09:05:30.706332       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1101 09:05:30.706562       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1101 09:05:30.706721       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1101 09:05:30.706820       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1101 09:05:30.706904       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1101 09:05:30.706985       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1101 09:05:30.707078       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1101 09:05:30.707157       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I1101 09:05:32.291602       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 01 09:05:33 scheduled-stop-599041 kubelet[1506]: I1101 09:05:33.304112    1506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1247b3c5987decb90966f535d48db508-kubeconfig\") pod \"kube-scheduler-scheduled-stop-599041\" (UID: \"1247b3c5987decb90966f535d48db508\") " pod="kube-system/kube-scheduler-scheduled-stop-599041"
	Nov 01 09:05:33 scheduled-stop-599041 kubelet[1506]: I1101 09:05:33.304130    1506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c825b846f35a1ae5d6628cea7dc686d2-usr-local-share-ca-certificates\") pod \"kube-controller-manager-scheduled-stop-599041\" (UID: \"c825b846f35a1ae5d6628cea7dc686d2\") " pod="kube-system/kube-controller-manager-scheduled-stop-599041"
	Nov 01 09:05:33 scheduled-stop-599041 kubelet[1506]: I1101 09:05:33.304149    1506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c825b846f35a1ae5d6628cea7dc686d2-k8s-certs\") pod \"kube-controller-manager-scheduled-stop-599041\" (UID: \"c825b846f35a1ae5d6628cea7dc686d2\") " pod="kube-system/kube-controller-manager-scheduled-stop-599041"
	Nov 01 09:05:33 scheduled-stop-599041 kubelet[1506]: I1101 09:05:33.304167    1506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c825b846f35a1ae5d6628cea7dc686d2-kubeconfig\") pod \"kube-controller-manager-scheduled-stop-599041\" (UID: \"c825b846f35a1ae5d6628cea7dc686d2\") " pod="kube-system/kube-controller-manager-scheduled-stop-599041"
	Nov 01 09:05:33 scheduled-stop-599041 kubelet[1506]: I1101 09:05:33.304185    1506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/88019947b935004d6bf205a7a549be36-etcd-data\") pod \"etcd-scheduled-stop-599041\" (UID: \"88019947b935004d6bf205a7a549be36\") " pod="kube-system/etcd-scheduled-stop-599041"
	Nov 01 09:05:33 scheduled-stop-599041 kubelet[1506]: I1101 09:05:33.304202    1506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/449597b5207707a77f08f5708640cfbc-ca-certs\") pod \"kube-apiserver-scheduled-stop-599041\" (UID: \"449597b5207707a77f08f5708640cfbc\") " pod="kube-system/kube-apiserver-scheduled-stop-599041"
	Nov 01 09:05:33 scheduled-stop-599041 kubelet[1506]: I1101 09:05:33.304220    1506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/449597b5207707a77f08f5708640cfbc-usr-share-ca-certificates\") pod \"kube-apiserver-scheduled-stop-599041\" (UID: \"449597b5207707a77f08f5708640cfbc\") " pod="kube-system/kube-apiserver-scheduled-stop-599041"
	Nov 01 09:05:33 scheduled-stop-599041 kubelet[1506]: I1101 09:05:33.304236    1506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c825b846f35a1ae5d6628cea7dc686d2-ca-certs\") pod \"kube-controller-manager-scheduled-stop-599041\" (UID: \"c825b846f35a1ae5d6628cea7dc686d2\") " pod="kube-system/kube-controller-manager-scheduled-stop-599041"
	Nov 01 09:05:33 scheduled-stop-599041 kubelet[1506]: I1101 09:05:33.304255    1506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c825b846f35a1ae5d6628cea7dc686d2-flexvolume-dir\") pod \"kube-controller-manager-scheduled-stop-599041\" (UID: \"c825b846f35a1ae5d6628cea7dc686d2\") " pod="kube-system/kube-controller-manager-scheduled-stop-599041"
	Nov 01 09:05:33 scheduled-stop-599041 kubelet[1506]: I1101 09:05:33.304278    1506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/449597b5207707a77f08f5708640cfbc-k8s-certs\") pod \"kube-apiserver-scheduled-stop-599041\" (UID: \"449597b5207707a77f08f5708640cfbc\") " pod="kube-system/kube-apiserver-scheduled-stop-599041"
	Nov 01 09:05:33 scheduled-stop-599041 kubelet[1506]: I1101 09:05:33.304303    1506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/449597b5207707a77f08f5708640cfbc-usr-local-share-ca-certificates\") pod \"kube-apiserver-scheduled-stop-599041\" (UID: \"449597b5207707a77f08f5708640cfbc\") " pod="kube-system/kube-apiserver-scheduled-stop-599041"
	Nov 01 09:05:33 scheduled-stop-599041 kubelet[1506]: I1101 09:05:33.770921    1506 apiserver.go:52] "Watching apiserver"
	Nov 01 09:05:33 scheduled-stop-599041 kubelet[1506]: I1101 09:05:33.802992    1506 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Nov 01 09:05:33 scheduled-stop-599041 kubelet[1506]: I1101 09:05:33.826093    1506 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-scheduled-stop-599041" podStartSLOduration=0.826072133 podStartE2EDuration="826.072133ms" podCreationTimestamp="2025-11-01 09:05:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 09:05:33.806186416 +0000 UTC m=+1.112336859" watchObservedRunningTime="2025-11-01 09:05:33.826072133 +0000 UTC m=+1.132222568"
	Nov 01 09:05:33 scheduled-stop-599041 kubelet[1506]: I1101 09:05:33.861345    1506 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-scheduled-stop-599041" podStartSLOduration=0.861326436 podStartE2EDuration="861.326436ms" podCreationTimestamp="2025-11-01 09:05:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 09:05:33.826471728 +0000 UTC m=+1.132622335" watchObservedRunningTime="2025-11-01 09:05:33.861326436 +0000 UTC m=+1.167476871"
	Nov 01 09:05:33 scheduled-stop-599041 kubelet[1506]: I1101 09:05:33.874587    1506 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-scheduled-stop-599041" podStartSLOduration=0.874568568 podStartE2EDuration="874.568568ms" podCreationTimestamp="2025-11-01 09:05:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 09:05:33.862132666 +0000 UTC m=+1.168283109" watchObservedRunningTime="2025-11-01 09:05:33.874568568 +0000 UTC m=+1.180718994"
	Nov 01 09:05:33 scheduled-stop-599041 kubelet[1506]: I1101 09:05:33.889070    1506 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-scheduled-stop-599041" podStartSLOduration=1.889052054 podStartE2EDuration="1.889052054s" podCreationTimestamp="2025-11-01 09:05:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 09:05:33.874949734 +0000 UTC m=+1.181100169" watchObservedRunningTime="2025-11-01 09:05:33.889052054 +0000 UTC m=+1.195202489"
	Nov 01 09:05:33 scheduled-stop-599041 kubelet[1506]: I1101 09:05:33.893324    1506 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-scheduled-stop-599041"
	Nov 01 09:05:33 scheduled-stop-599041 kubelet[1506]: I1101 09:05:33.896952    1506 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-scheduled-stop-599041"
	Nov 01 09:05:33 scheduled-stop-599041 kubelet[1506]: I1101 09:05:33.901934    1506 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-scheduled-stop-599041"
	Nov 01 09:05:33 scheduled-stop-599041 kubelet[1506]: E1101 09:05:33.906560    1506 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-scheduled-stop-599041\" already exists" pod="kube-system/etcd-scheduled-stop-599041"
	Nov 01 09:05:33 scheduled-stop-599041 kubelet[1506]: E1101 09:05:33.915115    1506 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-scheduled-stop-599041\" already exists" pod="kube-system/kube-apiserver-scheduled-stop-599041"
	Nov 01 09:05:33 scheduled-stop-599041 kubelet[1506]: E1101 09:05:33.915724    1506 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-scheduled-stop-599041\" already exists" pod="kube-system/kube-controller-manager-scheduled-stop-599041"
	Nov 01 09:05:37 scheduled-stop-599041 kubelet[1506]: I1101 09:05:37.145990    1506 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 01 09:05:37 scheduled-stop-599041 kubelet[1506]: I1101 09:05:37.146790    1506 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p scheduled-stop-599041 -n scheduled-stop-599041
helpers_test.go:269: (dbg) Run:  kubectl --context scheduled-stop-599041 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: storage-provisioner
helpers_test.go:282: ======> post-mortem[TestScheduledStopUnix]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context scheduled-stop-599041 describe pod storage-provisioner
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context scheduled-stop-599041 describe pod storage-provisioner: exit status 1 (166.37193ms)

** stderr ** 
	Error from server (NotFound): pods "storage-provisioner" not found

** /stderr **
helpers_test.go:287: kubectl --context scheduled-stop-599041 describe pod storage-provisioner: exit status 1
helpers_test.go:175: Cleaning up "scheduled-stop-599041" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-599041
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-599041: (2.227530369s)
--- FAIL: TestScheduledStopUnix (41.62s)
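
For local diagnosis, the sequence this test exercises can be approximated directly with the minikube CLI. The sketch below is illustrative only: it reuses the profile name from this run but assumes a minikube binary on PATH rather than the out/minikube-linux-arm64 build under test; --schedule, --container-runtime, and status --format are standard minikube flags.

	# Start a containerd-backed cluster on the docker driver.
	minikube start -p scheduled-stop-599041 --memory=3072 --driver=docker --container-runtime=containerd

	# Schedule a stop 5 minutes out, then reschedule it to 15 seconds.
	# Rescheduling should replace the pending stop rather than leave two
	# stop timers racing.
	minikube stop -p scheduled-stop-599041 --schedule 5m
	minikube stop -p scheduled-stop-599041 --schedule 15s

	# TimeToStop reports the remaining countdown while a stop is pending.
	minikube status -p scheduled-stop-599041 --format='{{.TimeToStop}}'

On Unix, minikube implements --schedule by forking a detached background process that sleeps for the given duration before stopping the profile, so each reschedule is expected to terminate the previously forked daemon before starting a new one.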


Test pass (301/332)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.28.0/json-events 5.86
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.1
9 TestDownloadOnly/v1.28.0/DeleteAll 0.24
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.34.1/json-events 4.09
13 TestDownloadOnly/v1.34.1/preload-exists 0
17 TestDownloadOnly/v1.34.1/LogsDuration 0.25
18 TestDownloadOnly/v1.34.1/DeleteAll 0.28
19 TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds 0.22
21 TestBinaryMirror 0.58
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
27 TestAddons/Setup 161.27
29 TestAddons/serial/Volcano 41.89
31 TestAddons/serial/GCPAuth/Namespaces 0.18
32 TestAddons/serial/GCPAuth/FakeCredentials 8.96
35 TestAddons/parallel/Registry 17.18
36 TestAddons/parallel/RegistryCreds 0.96
37 TestAddons/parallel/Ingress 20.49
38 TestAddons/parallel/InspektorGadget 6.29
39 TestAddons/parallel/MetricsServer 6.05
41 TestAddons/parallel/CSI 37.77
42 TestAddons/parallel/Headlamp 17.58
43 TestAddons/parallel/CloudSpanner 6.66
44 TestAddons/parallel/LocalPath 51.89
45 TestAddons/parallel/NvidiaDevicePlugin 6.01
46 TestAddons/parallel/Yakd 11.87
48 TestAddons/StoppedEnableDisable 12.4
49 TestCertOptions 38.62
50 TestCertExpiration 237
52 TestForceSystemdFlag 36.51
53 TestForceSystemdEnv 48.63
54 TestDockerEnvContainerd 49.32
58 TestErrorSpam/setup 33.01
59 TestErrorSpam/start 0.82
60 TestErrorSpam/status 1.14
61 TestErrorSpam/pause 1.71
62 TestErrorSpam/unpause 1.95
63 TestErrorSpam/stop 1.63
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 78.78
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 7.65
70 TestFunctional/serial/KubeContext 0.06
71 TestFunctional/serial/KubectlGetPods 0.11
74 TestFunctional/serial/CacheCmd/cache/add_remote 3.44
75 TestFunctional/serial/CacheCmd/cache/add_local 1.26
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
77 TestFunctional/serial/CacheCmd/cache/list 0.06
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.29
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.88
80 TestFunctional/serial/CacheCmd/cache/delete 0.11
81 TestFunctional/serial/MinikubeKubectlCmd 0.14
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.13
83 TestFunctional/serial/ExtraConfig 49.18
84 TestFunctional/serial/ComponentHealth 0.09
85 TestFunctional/serial/LogsCmd 1.51
86 TestFunctional/serial/LogsFileCmd 1.44
87 TestFunctional/serial/InvalidService 4.88
89 TestFunctional/parallel/ConfigCmd 0.39
90 TestFunctional/parallel/DashboardCmd 7.41
91 TestFunctional/parallel/DryRun 0.46
92 TestFunctional/parallel/InternationalLanguage 0.2
93 TestFunctional/parallel/StatusCmd 1.23
97 TestFunctional/parallel/ServiceCmdConnect 9.65
98 TestFunctional/parallel/AddonsCmd 0.18
99 TestFunctional/parallel/PersistentVolumeClaim 25
101 TestFunctional/parallel/SSHCmd 0.72
102 TestFunctional/parallel/CpCmd 2.36
104 TestFunctional/parallel/FileSync 0.33
105 TestFunctional/parallel/CertSync 2.29
109 TestFunctional/parallel/NodeLabels 0.11
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.97
113 TestFunctional/parallel/License 0.35
115 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.65
116 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
118 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.45
119 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.1
120 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
124 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
125 TestFunctional/parallel/ServiceCmd/DeployApp 7.22
126 TestFunctional/parallel/ProfileCmd/profile_not_create 0.44
127 TestFunctional/parallel/ProfileCmd/profile_list 0.48
128 TestFunctional/parallel/ServiceCmd/List 0.62
129 TestFunctional/parallel/ProfileCmd/profile_json_output 0.53
130 TestFunctional/parallel/ServiceCmd/JSONOutput 0.7
131 TestFunctional/parallel/MountCmd/any-port 9.24
132 TestFunctional/parallel/ServiceCmd/HTTPS 0.38
133 TestFunctional/parallel/ServiceCmd/Format 0.44
134 TestFunctional/parallel/ServiceCmd/URL 0.44
135 TestFunctional/parallel/MountCmd/specific-port 2.15
136 TestFunctional/parallel/Version/short 0.09
137 TestFunctional/parallel/Version/components 1.34
138 TestFunctional/parallel/MountCmd/VerifyCleanup 1.57
139 TestFunctional/parallel/ImageCommands/ImageListShort 0.24
140 TestFunctional/parallel/ImageCommands/ImageListTable 0.29
141 TestFunctional/parallel/ImageCommands/ImageListJson 0.29
142 TestFunctional/parallel/ImageCommands/ImageListYaml 0.33
143 TestFunctional/parallel/ImageCommands/ImageBuild 3.96
144 TestFunctional/parallel/ImageCommands/Setup 0.66
145 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.34
146 TestFunctional/parallel/UpdateContextCmd/no_changes 0.24
147 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.2
148 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.2
149 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.3
150 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.41
151 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.44
152 TestFunctional/parallel/ImageCommands/ImageRemove 0.59
153 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.63
154 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.42
155 TestFunctional/delete_echo-server_images 0.05
156 TestFunctional/delete_my-image_image 0.02
157 TestFunctional/delete_minikube_cached_images 0.02
162 TestMultiControlPlane/serial/StartCluster 184.25
163 TestMultiControlPlane/serial/DeployApp 7.51
164 TestMultiControlPlane/serial/PingHostFromPods 1.67
165 TestMultiControlPlane/serial/AddWorkerNode 61.82
166 TestMultiControlPlane/serial/NodeLabels 0.1
167 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.11
168 TestMultiControlPlane/serial/CopyFile 20.51
169 TestMultiControlPlane/serial/StopSecondaryNode 12.96
170 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.81
171 TestMultiControlPlane/serial/RestartSecondaryNode 15.08
172 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.38
173 TestMultiControlPlane/serial/RestartClusterKeepsNodes 97.63
174 TestMultiControlPlane/serial/DeleteSecondaryNode 10.88
175 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.81
176 TestMultiControlPlane/serial/StopCluster 36.35
177 TestMultiControlPlane/serial/RestartCluster 60.4
178 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.8
179 TestMultiControlPlane/serial/AddSecondaryNode 51.56
180 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.12
185 TestJSONOutput/start/Command 83.92
186 TestJSONOutput/start/Audit 0
188 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
191 TestJSONOutput/pause/Command 0.75
192 TestJSONOutput/pause/Audit 0
194 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
197 TestJSONOutput/unpause/Command 0.62
198 TestJSONOutput/unpause/Audit 0
200 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
203 TestJSONOutput/stop/Command 5.97
204 TestJSONOutput/stop/Audit 0
206 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
207 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
208 TestErrorJSONOutput 0.24
210 TestKicCustomNetwork/create_custom_network 38.27
211 TestKicCustomNetwork/use_default_bridge_network 38.7
212 TestKicExistingNetwork 38.3
213 TestKicCustomSubnet 37.21
214 TestKicStaticIP 36.88
215 TestMainNoArgs 0.05
216 TestMinikubeProfile 71.55
219 TestMountStart/serial/StartWithMountFirst 8.87
220 TestMountStart/serial/VerifyMountFirst 0.26
221 TestMountStart/serial/StartWithMountSecond 9.21
222 TestMountStart/serial/VerifyMountSecond 0.27
223 TestMountStart/serial/DeleteFirst 1.73
224 TestMountStart/serial/VerifyMountPostDelete 0.27
225 TestMountStart/serial/Stop 1.29
226 TestMountStart/serial/RestartStopped 8.04
227 TestMountStart/serial/VerifyMountPostStop 0.27
230 TestMultiNode/serial/FreshStart2Nodes 134.39
231 TestMultiNode/serial/DeployApp2Nodes 5.67
232 TestMultiNode/serial/PingHostFrom2Pods 1.08
233 TestMultiNode/serial/AddNode 29.42
234 TestMultiNode/serial/MultiNodeLabels 0.1
235 TestMultiNode/serial/ProfileList 0.75
236 TestMultiNode/serial/CopyFile 10.25
237 TestMultiNode/serial/StopNode 2.4
238 TestMultiNode/serial/StartAfterStop 7.93
239 TestMultiNode/serial/RestartKeepsNodes 74.56
240 TestMultiNode/serial/DeleteNode 5.7
241 TestMultiNode/serial/StopMultiNode 24.1
242 TestMultiNode/serial/RestartMultiNode 54.56
243 TestMultiNode/serial/ValidateNameConflict 37.59
248 TestPreload 121.92
253 TestInsufficientStorage 12.63
254 TestRunningBinaryUpgrade 72.95
256 TestKubernetesUpgrade 102.75
257 TestMissingContainerUpgrade 141.51
259 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
260 TestNoKubernetes/serial/StartWithK8s 38.64
261 TestNoKubernetes/serial/StartWithStopK8s 25.8
262 TestNoKubernetes/serial/Start 8.14
263 TestNoKubernetes/serial/VerifyK8sNotRunning 0.28
264 TestNoKubernetes/serial/ProfileList 0.68
265 TestNoKubernetes/serial/Stop 1.28
266 TestNoKubernetes/serial/StartNoArgs 6.65
267 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.35
268 TestStoppedBinaryUpgrade/Setup 0.7
269 TestStoppedBinaryUpgrade/Upgrade 64.21
270 TestStoppedBinaryUpgrade/MinikubeLogs 1.93
279 TestPause/serial/Start 85.19
280 TestPause/serial/SecondStartNoReconfiguration 7.94
288 TestNetworkPlugins/group/false 4.99
289 TestPause/serial/Pause 0.84
290 TestPause/serial/VerifyStatus 0.4
291 TestPause/serial/Unpause 0.82
292 TestPause/serial/PauseAgain 1.08
293 TestPause/serial/DeletePaused 3.45
297 TestPause/serial/VerifyDeletedResources 0.18
299 TestStartStop/group/old-k8s-version/serial/FirstStart 64.36
300 TestStartStop/group/old-k8s-version/serial/DeployApp 9.46
301 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.32
302 TestStartStop/group/old-k8s-version/serial/Stop 12.1
303 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.18
304 TestStartStop/group/old-k8s-version/serial/SecondStart 55.51
305 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.02
306 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 6.13
308 TestStartStop/group/no-preload/serial/FirstStart 69.32
309 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.34
310 TestStartStop/group/old-k8s-version/serial/Pause 3.8
312 TestStartStop/group/embed-certs/serial/FirstStart 93.11
313 TestStartStop/group/no-preload/serial/DeployApp 8.36
314 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.09
315 TestStartStop/group/no-preload/serial/Stop 12.2
316 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.2
317 TestStartStop/group/no-preload/serial/SecondStart 53.99
318 TestStartStop/group/embed-certs/serial/DeployApp 9.5
319 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.78
320 TestStartStop/group/embed-certs/serial/Stop 12.73
321 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.18
322 TestStartStop/group/embed-certs/serial/SecondStart 50.03
323 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6
324 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 6.09
325 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.25
326 TestStartStop/group/no-preload/serial/Pause 3.15
328 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 58.79
329 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
330 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.18
331 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.33
332 TestStartStop/group/embed-certs/serial/Pause 3.93
334 TestStartStop/group/newest-cni/serial/FirstStart 39.9
335 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.5
336 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.22
337 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.49
338 TestStartStop/group/newest-cni/serial/DeployApp 0
339 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.95
340 TestStartStop/group/newest-cni/serial/Stop 1.34
341 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.18
342 TestStartStop/group/newest-cni/serial/SecondStart 21.36
343 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.24
344 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 58.22
345 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
346 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
347 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.46
348 TestStartStop/group/newest-cni/serial/Pause 4.48
349 TestNetworkPlugins/group/auto/Start 83.73
350 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
351 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.1
352 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.27
353 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.08
354 TestNetworkPlugins/group/kindnet/Start 84.34
355 TestNetworkPlugins/group/auto/KubeletFlags 0.32
356 TestNetworkPlugins/group/auto/NetCatPod 10.34
357 TestNetworkPlugins/group/auto/DNS 0.18
358 TestNetworkPlugins/group/auto/Localhost 0.17
359 TestNetworkPlugins/group/auto/HairPin 0.15
360 TestNetworkPlugins/group/calico/Start 64.52
361 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
362 TestNetworkPlugins/group/kindnet/KubeletFlags 0.4
363 TestNetworkPlugins/group/kindnet/NetCatPod 10.43
364 TestNetworkPlugins/group/kindnet/DNS 0.25
365 TestNetworkPlugins/group/kindnet/Localhost 0.24
366 TestNetworkPlugins/group/kindnet/HairPin 0.22
367 TestNetworkPlugins/group/custom-flannel/Start 73.69
368 TestNetworkPlugins/group/calico/ControllerPod 6.01
369 TestNetworkPlugins/group/calico/KubeletFlags 0.54
370 TestNetworkPlugins/group/calico/NetCatPod 9.46
371 TestNetworkPlugins/group/calico/DNS 0.21
372 TestNetworkPlugins/group/calico/Localhost 0.19
373 TestNetworkPlugins/group/calico/HairPin 0.2
374 TestNetworkPlugins/group/enable-default-cni/Start 78.38
375 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.51
376 TestNetworkPlugins/group/custom-flannel/NetCatPod 9.45
377 TestNetworkPlugins/group/custom-flannel/DNS 0.38
378 TestNetworkPlugins/group/custom-flannel/Localhost 0.26
379 TestNetworkPlugins/group/custom-flannel/HairPin 0.23
380 TestNetworkPlugins/group/flannel/Start 60.77
381 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.38
382 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.34
383 TestNetworkPlugins/group/enable-default-cni/DNS 0.27
384 TestNetworkPlugins/group/enable-default-cni/Localhost 0.18
385 TestNetworkPlugins/group/enable-default-cni/HairPin 0.21
386 TestNetworkPlugins/group/bridge/Start 88.68
387 TestNetworkPlugins/group/flannel/ControllerPod 6.01
388 TestNetworkPlugins/group/flannel/KubeletFlags 0.39
389 TestNetworkPlugins/group/flannel/NetCatPod 10.34
390 TestNetworkPlugins/group/flannel/DNS 0.17
391 TestNetworkPlugins/group/flannel/Localhost 0.23
392 TestNetworkPlugins/group/flannel/HairPin 0.22
393 TestNetworkPlugins/group/bridge/KubeletFlags 0.3
394 TestNetworkPlugins/group/bridge/NetCatPod 9.26
395 TestNetworkPlugins/group/bridge/DNS 0.18
396 TestNetworkPlugins/group/bridge/Localhost 0.15
397 TestNetworkPlugins/group/bridge/HairPin 0.21
TestDownloadOnly/v1.28.0/json-events (5.86s)

=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-318807 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-318807 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (5.863125239s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (5.86s)
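Note: the json-events subtests exercise minikube's machine-readable output; with -o=json, minikube start writes one JSON object per line to stdout. A minimal sketch of consuming that stream from Go (the profile name and the "type" key below are illustrative assumptions, not minikube's exact event schema):

```go
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

func main() {
	// Mirrors the test's invocation; the profile name is hypothetical.
	cmd := exec.Command("minikube", "start", "-o=json", "--download-only",
		"-p", "download-only-demo", "--driver=docker", "--container-runtime=containerd")
	stdout, err := cmd.StdoutPipe()
	if err != nil {
		panic(err)
	}
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	dec := json.NewDecoder(stdout)
	for {
		var ev map[string]any
		if err := dec.Decode(&ev); err != nil {
			break // io.EOF once minikube exits and the pipe drains
		}
		fmt.Println("event:", ev["type"]) // "type" is an assumed field name
	}
	_ = cmd.Wait()
}
```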

                                                
                                    
TestDownloadOnly/v1.28.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1101 08:29:30.818158    4107 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime containerd
I1101 08:29:30.818239    4107 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21835-2307/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)
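Note: preload-exists only asserts that the expected tarball landed in the local cache after the download-only start. A rough equivalent of that check, with the file name pattern taken from the preload.go log lines above and MINIKUBE_HOME assumed to be the default under the user's home directory:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	home, err := os.UserHomeDir()
	if err != nil {
		panic(err)
	}
	// File name pattern as reported by preload.go in the log above.
	tarball := filepath.Join(home, ".minikube", "cache", "preloaded-tarball",
		"preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4")
	if info, err := os.Stat(tarball); err == nil {
		fmt.Printf("found local preload: %s (%d bytes)\n", tarball, info.Size())
	} else {
		fmt.Println("no local preload:", err)
	}
}
```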

                                                
                                    
TestDownloadOnly/v1.28.0/LogsDuration (0.1s)

=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-318807
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-318807: exit status 85 (94.708476ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                         ARGS                                                                                          │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-318807 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-318807 │ jenkins │ v1.37.0 │ 01 Nov 25 08:29 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 08:29:24
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 08:29:24.994396    4113 out.go:360] Setting OutFile to fd 1 ...
	I1101 08:29:24.994578    4113 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 08:29:24.994600    4113 out.go:374] Setting ErrFile to fd 2...
	I1101 08:29:24.994619    4113 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 08:29:24.995022    4113 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21835-2307/.minikube/bin
	W1101 08:29:24.995231    4113 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21835-2307/.minikube/config/config.json: open /home/jenkins/minikube-integration/21835-2307/.minikube/config/config.json: no such file or directory
	I1101 08:29:24.996284    4113 out.go:368] Setting JSON to true
	I1101 08:29:24.997195    4113 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":714,"bootTime":1761985051,"procs":150,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1101 08:29:24.997347    4113 start.go:143] virtualization:  
	I1101 08:29:25.001740    4113 out.go:99] [download-only-318807] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1101 08:29:25.002001    4113 notify.go:221] Checking for updates...
	W1101 08:29:25.001959    4113 preload.go:349] Failed to list preload files: open /home/jenkins/minikube-integration/21835-2307/.minikube/cache/preloaded-tarball: no such file or directory
	I1101 08:29:25.004942    4113 out.go:171] MINIKUBE_LOCATION=21835
	I1101 08:29:25.007978    4113 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 08:29:25.010961    4113 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21835-2307/kubeconfig
	I1101 08:29:25.013993    4113 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21835-2307/.minikube
	I1101 08:29:25.017034    4113 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1101 08:29:25.022820    4113 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1101 08:29:25.023126    4113 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 08:29:25.044868    4113 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1101 08:29:25.044973    4113 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 08:29:25.456066    4113 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-11-01 08:29:25.441955196 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 08:29:25.456177    4113 docker.go:319] overlay module found
	I1101 08:29:25.459216    4113 out.go:99] Using the docker driver based on user configuration
	I1101 08:29:25.459251    4113 start.go:309] selected driver: docker
	I1101 08:29:25.459259    4113 start.go:930] validating driver "docker" against <nil>
	I1101 08:29:25.459354    4113 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 08:29:25.528966    4113 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:61 SystemTime:2025-11-01 08:29:25.519448964 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 08:29:25.529153    4113 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1101 08:29:25.529437    4113 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1101 08:29:25.529608    4113 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1101 08:29:25.532760    4113 out.go:171] Using Docker driver with root privileges
	I1101 08:29:25.535632    4113 cni.go:84] Creating CNI manager for ""
	I1101 08:29:25.535705    4113 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1101 08:29:25.535719    4113 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1101 08:29:25.535809    4113 start.go:353] cluster config:
	{Name:download-only-318807 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-318807 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 08:29:25.538734    4113 out.go:99] Starting "download-only-318807" primary control-plane node in "download-only-318807" cluster
	I1101 08:29:25.538759    4113 cache.go:124] Beginning downloading kic base image for docker with containerd
	I1101 08:29:25.541705    4113 out.go:99] Pulling base image v0.0.48-1760939008-21773 ...
	I1101 08:29:25.541733    4113 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1101 08:29:25.541834    4113 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1101 08:29:25.557556    4113 cache.go:153] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 to local cache
	I1101 08:29:25.557806    4113 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local cache directory
	I1101 08:29:25.557913    4113 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 to local cache
	I1101 08:29:25.600621    4113 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4
	I1101 08:29:25.600660    4113 cache.go:59] Caching tarball of preloaded images
	I1101 08:29:25.600816    4113 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1101 08:29:25.604117    4113 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1101 08:29:25.604145    4113 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4 from gcs api...
	I1101 08:29:25.686792    4113 preload.go:290] Got checksum from GCS API "38d7f581f2fa4226c8af2c9106b982b7"
	I1101 08:29:25.686956    4113 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4?checksum=md5:38d7f581f2fa4226c8af2c9106b982b7 -> /home/jenkins/minikube-integration/21835-2307/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4
	I1101 08:29:29.634165    4113 cache.go:62] Finished verifying existence of preloaded tar for v1.28.0 on containerd
	I1101 08:29:29.634519    4113 profile.go:143] Saving config to /home/jenkins/minikube-integration/21835-2307/.minikube/profiles/download-only-318807/config.json ...
	I1101 08:29:29.634553    4113 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-2307/.minikube/profiles/download-only-318807/config.json: {Name:mk8aa74ac416defbece73c49b02ac8ab71154564 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 08:29:29.634714    4113 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1101 08:29:29.634894    4113 download.go:108] Downloading: https://dl.k8s.io/release/v1.28.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.0/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/21835-2307/.minikube/cache/linux/arm64/v1.28.0/kubectl
	
	
	* The control-plane node download-only-318807 host does not exist
	  To start a cluster, run: "minikube start -p download-only-318807"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.10s)
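Note: exit status 85 is the expected outcome here, since a download-only profile never creates a control-plane host for minikube logs to inspect; the test passes precisely because the command failed that way. A hedged sketch of capturing the exit code the way the harness does:

```go
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-arm64", "logs", "-p", "download-only-318807")
	out, err := cmd.CombinedOutput()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// Expected path: the host does not exist, so logs cannot be gathered.
		fmt.Printf("minikube logs failed with exit status %d\n%s", exitErr.ExitCode(), out)
		return
	}
	fmt.Printf("unexpected success or non-exit error: %v\n%s", err, out)
}
```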

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAll (0.24s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.24s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-318807
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
TestDownloadOnly/v1.34.1/json-events (4.09s)

=== RUN   TestDownloadOnly/v1.34.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-637559 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-637559 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (4.085111719s)
--- PASS: TestDownloadOnly/v1.34.1/json-events (4.09s)

                                                
                                    
TestDownloadOnly/v1.34.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.34.1/preload-exists
I1101 08:29:35.370080    4107 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime containerd
I1101 08:29:35.370114    4107 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21835-2307/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.1/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.1/LogsDuration (0.25s)

=== RUN   TestDownloadOnly/v1.34.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-637559
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-637559: exit status 85 (249.225522ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                         ARGS                                                                                          │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-318807 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-318807 │ jenkins │ v1.37.0 │ 01 Nov 25 08:29 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                 │ minikube             │ jenkins │ v1.37.0 │ 01 Nov 25 08:29 UTC │ 01 Nov 25 08:29 UTC │
	│ delete  │ -p download-only-318807                                                                                                                                                               │ download-only-318807 │ jenkins │ v1.37.0 │ 01 Nov 25 08:29 UTC │ 01 Nov 25 08:29 UTC │
	│ start   │ -o=json --download-only -p download-only-637559 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-637559 │ jenkins │ v1.37.0 │ 01 Nov 25 08:29 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 08:29:31
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 08:29:31.332129    4316 out.go:360] Setting OutFile to fd 1 ...
	I1101 08:29:31.332236    4316 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 08:29:31.332246    4316 out.go:374] Setting ErrFile to fd 2...
	I1101 08:29:31.332251    4316 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 08:29:31.332480    4316 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21835-2307/.minikube/bin
	I1101 08:29:31.332869    4316 out.go:368] Setting JSON to true
	I1101 08:29:31.333532    4316 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":721,"bootTime":1761985051,"procs":144,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1101 08:29:31.333601    4316 start.go:143] virtualization:  
	I1101 08:29:31.336901    4316 out.go:99] [download-only-637559] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1101 08:29:31.337057    4316 notify.go:221] Checking for updates...
	I1101 08:29:31.340121    4316 out.go:171] MINIKUBE_LOCATION=21835
	I1101 08:29:31.343212    4316 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 08:29:31.346200    4316 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21835-2307/kubeconfig
	I1101 08:29:31.349238    4316 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21835-2307/.minikube
	I1101 08:29:31.352065    4316 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1101 08:29:31.357817    4316 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1101 08:29:31.358108    4316 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 08:29:31.392838    4316 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1101 08:29:31.392960    4316 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 08:29:31.456166    4316 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:49 SystemTime:2025-11-01 08:29:31.447037683 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 08:29:31.456268    4316 docker.go:319] overlay module found
	I1101 08:29:31.459206    4316 out.go:99] Using the docker driver based on user configuration
	I1101 08:29:31.459253    4316 start.go:309] selected driver: docker
	I1101 08:29:31.459260    4316 start.go:930] validating driver "docker" against <nil>
	I1101 08:29:31.459366    4316 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 08:29:31.522146    4316 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:49 SystemTime:2025-11-01 08:29:31.513482435 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 08:29:31.522292    4316 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1101 08:29:31.522580    4316 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1101 08:29:31.522731    4316 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1101 08:29:31.525904    4316 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-637559 host does not exist
	  To start a cluster, run: "minikube start -p download-only-637559"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.1/LogsDuration (0.25s)

                                                
                                    
TestDownloadOnly/v1.34.1/DeleteAll (0.28s)

=== RUN   TestDownloadOnly/v1.34.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.34.1/DeleteAll (0.28s)

                                                
                                    
TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.22s)

=== RUN   TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-637559
--- PASS: TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.22s)

                                                
                                    
TestBinaryMirror (0.58s)

=== RUN   TestBinaryMirror
I1101 08:29:37.227689    4107 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-267897 --alsologtostderr --binary-mirror http://127.0.0.1:41397 --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-267897" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-267897
--- PASS: TestBinaryMirror (0.58s)
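Note: TestBinaryMirror starts a throwaway HTTP server on 127.0.0.1 and passes its address via --binary-mirror, so the kubectl download above is served locally instead of from dl.k8s.io. A minimal stand-in for such a mirror, assuming a local directory laid out like the dl.k8s.io release tree:

```go
package main

import (
	"log"
	"net/http"
)

func main() {
	// ./mirror is assumed to contain e.g.
	//   release/v1.34.1/bin/linux/arm64/kubectl
	//   release/v1.34.1/bin/linux/arm64/kubectl.sha256
	// Then start minikube with: --binary-mirror http://127.0.0.1:41397
	http.Handle("/", http.FileServer(http.Dir("./mirror")))
	log.Fatal(http.ListenAndServe("127.0.0.1:41397", nil))
}
```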

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-775283
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-775283: exit status 85 (74.455272ms)

                                                
                                                
-- stdout --
	* Profile "addons-775283" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-775283"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-775283
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-775283: exit status 85 (71.798852ms)

                                                
                                                
-- stdout --
	* Profile "addons-775283" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-775283"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
TestAddons/Setup (161.27s)

=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-arm64 start -p addons-775283 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-arm64 start -p addons-775283 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m41.272408525s)
--- PASS: TestAddons/Setup (161.27s)

                                                
                                    
TestAddons/serial/Volcano (41.89s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:868: volcano-scheduler stabilized in 70.591171ms
addons_test.go:884: volcano-controller stabilized in 71.184797ms
addons_test.go:876: volcano-admission stabilized in 71.254828ms
addons_test.go:890: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-scheduler-76c996c8bf-x6sgk" [1d426aee-8566-449c-8828-1d7b3fb042b8] Running
addons_test.go:890: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.003399033s
addons_test.go:894: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-admission-6c447bd768-4hh6j" [63541679-9d7c-43e1-b9dd-0a1509331636] Running
addons_test.go:894: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.003332691s
addons_test.go:898: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-controllers-6fd4f85cb8-wlb8x" [f22bb4f7-5eb0-45ed-9065-690b7c371ed1] Running
addons_test.go:898: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.004330318s
addons_test.go:903: (dbg) Run:  kubectl --context addons-775283 delete -n volcano-system job volcano-admission-init
addons_test.go:909: (dbg) Run:  kubectl --context addons-775283 create -f testdata/vcjob.yaml
addons_test.go:917: (dbg) Run:  kubectl --context addons-775283 get vcjob -n my-volcano
addons_test.go:935: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:352: "test-job-nginx-0" [3570a516-af40-4f18-8752-7336a7852905] Pending
helpers_test.go:352: "test-job-nginx-0" [3570a516-af40-4f18-8752-7336a7852905] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "test-job-nginx-0" [3570a516-af40-4f18-8752-7336a7852905] Running
addons_test.go:935: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 13.004130494s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-775283 addons disable volcano --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-775283 addons disable volcano --alsologtostderr -v=1: (12.214116454s)
--- PASS: TestAddons/serial/Volcano (41.89s)
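Note: each of the Volcano waits above boils down to polling pods by label until all of them report Running. A simplified version of that wait loop, shelling out to kubectl the way the test helpers do (the jsonpath query here is an assumption; helpers_test.go may phrase it differently):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForRunning polls pods matching selector until every phase is Running,
// or the deadline passes.
func waitForRunning(context, ns, selector string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", context, "get", "pods",
			"-n", ns, "-l", selector,
			"-o", "jsonpath={.items[*].status.phase}").Output()
		if err == nil {
			phases := strings.Fields(string(out))
			running := len(phases) > 0
			for _, p := range phases {
				if p != "Running" {
					running = false
				}
			}
			if running {
				return nil
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pods %q in %q not Running within %v", selector, ns, timeout)
}

func main() {
	err := waitForRunning("addons-775283", "volcano-system", "app=volcano-scheduler", 6*time.Minute)
	fmt.Println("wait result:", err)
}
```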

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.18s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-775283 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-775283 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.18s)

                                                
                                    
TestAddons/serial/GCPAuth/FakeCredentials (8.96s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-775283 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-775283 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [a3f36d4a-e4f0-44a7-ab5e-455fc3d42d81] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [a3f36d4a-e4f0-44a7-ab5e-455fc3d42d81] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 8.004210792s
addons_test.go:694: (dbg) Run:  kubectl --context addons-775283 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-775283 describe sa gcp-auth-test
addons_test.go:720: (dbg) Run:  kubectl --context addons-775283 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:744: (dbg) Run:  kubectl --context addons-775283 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (8.96s)

                                                
                                    
TestAddons/parallel/Registry (17.18s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 8.576671ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-6b586f9694-44chn" [bc9482c9-547e-4113-8092-ce312558aef0] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.003468916s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-rzrtj" [acbda275-7517-4df1-ac43-f0041bc1ffa5] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003678556s
addons_test.go:392: (dbg) Run:  kubectl --context addons-775283 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-775283 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-775283 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (5.124240548s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-arm64 -p addons-775283 ip
2025/11/01 08:33:35 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-775283 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (17.18s)
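Note: after the in-cluster wget --spider check, the test fetches the registry straight from the node IP on port 5000 (the DEBUG GET line above). The same probe from the host, with the address hard-coded as an assumption from this run:

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	// 192.168.49.2 is the node IP reported by `minikube -p addons-775283 ip` above.
	client := &http.Client{Timeout: 5 * time.Second}
	resp, err := client.Get("http://192.168.49.2:5000/")
	if err != nil {
		fmt.Println("registry unreachable:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("registry responded:", resp.Status)
}
```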

                                                
                                    
TestAddons/parallel/RegistryCreds (0.96s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 3.110613ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-arm64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-775283
addons_test.go:332: (dbg) Run:  kubectl --context addons-775283 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-775283 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.96s)

                                                
                                    
TestAddons/parallel/Ingress (20.49s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-775283 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-775283 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-775283 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [f289c81b-9981-44a7-ba2e-3e5287e65d2e] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [f289c81b-9981-44a7-ba2e-3e5287e65d2e] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.003682018s
I1101 08:34:48.815140    4107 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-arm64 -p addons-775283 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-775283 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-arm64 -p addons-775283 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-775283 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-775283 addons disable ingress-dns --alsologtostderr -v=1: (1.90317117s)
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-775283 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-775283 addons disable ingress --alsologtostderr -v=1: (7.790784536s)
--- PASS: TestAddons/parallel/Ingress (20.49s)
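Note: the `curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'` step works because the nginx ingress controller routes on the Host header rather than the dial address. The equivalent request in Go, assuming it runs somewhere 127.0.0.1:80 reaches the ingress (inside the node, as the test does via ssh):

```go
package main

import (
	"fmt"
	"net/http"
)

func main() {
	req, err := http.NewRequest("GET", "http://127.0.0.1/", nil)
	if err != nil {
		panic(err)
	}
	// Setting req.Host overrides the Host header while still dialing 127.0.0.1.
	req.Host = "nginx.example.com"
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}
```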

                                                
                                    
TestAddons/parallel/InspektorGadget (6.29s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-b4r74" [de189277-404b-4be8-8f5e-977fbf61f4c5] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.003726863s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-775283 addons disable inspektor-gadget --alsologtostderr -v=1
--- PASS: TestAddons/parallel/InspektorGadget (6.29s)

                                                
                                    
TestAddons/parallel/MetricsServer (6.05s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 48.509235ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-mzrfn" [029a28a8-1f5c-4f08-9475-bc9cc48e722f] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.003566454s
addons_test.go:463: (dbg) Run:  kubectl --context addons-775283 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-775283 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.05s)

                                                
                                    
TestAddons/parallel/CSI (37.77s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I1101 08:34:01.420097    4107 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1101 08:34:01.424895    4107 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1101 08:34:01.424919    4107 kapi.go:107] duration metric: took 7.627441ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 7.637796ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-775283 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-775283 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-775283 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-775283 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-775283 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-775283 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-775283 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-775283 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-775283 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-775283 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [482e5bfd-7381-49a1-91e3-8c923930c694] Pending
helpers_test.go:352: "task-pv-pod" [482e5bfd-7381-49a1-91e3-8c923930c694] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [482e5bfd-7381-49a1-91e3-8c923930c694] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 7.004423691s
addons_test.go:572: (dbg) Run:  kubectl --context addons-775283 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-775283 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-775283 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-775283 delete pod task-pv-pod
addons_test.go:588: (dbg) Run:  kubectl --context addons-775283 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-775283 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-775283 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-775283 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-775283 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-775283 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-775283 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-775283 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-775283 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [fcd5af04-d094-471a-b293-502f4f87c426] Pending
helpers_test.go:352: "task-pv-pod-restore" [fcd5af04-d094-471a-b293-502f4f87c426] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [fcd5af04-d094-471a-b293-502f4f87c426] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.004184423s
addons_test.go:614: (dbg) Run:  kubectl --context addons-775283 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-775283 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-775283 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-775283 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-775283 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-775283 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.886151983s)
--- PASS: TestAddons/parallel/CSI (37.77s)
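
The block above exercises a full CSI snapshot/restore round trip. As a minimal sketch of the same flow, assuming the manifests shipped in minikube's testdata/csi-hostpath-driver directory and omitting the --context flag for brevity:

	# provision a claim and a pod that mounts it
	kubectl create -f testdata/csi-hostpath-driver/pvc.yaml
	kubectl create -f testdata/csi-hostpath-driver/pv-pod.yaml
	# snapshot the volume, then drop the original pod and claim
	kubectl create -f testdata/csi-hostpath-driver/snapshot.yaml
	kubectl delete pod task-pv-pod
	kubectl delete pvc hpvc
	# restore a new claim from the snapshot and mount it in a fresh pod
	kubectl create -f testdata/csi-hostpath-driver/pvc-restore.yaml
	kubectl create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml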

TestAddons/parallel/Headlamp (17.58s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-775283 --alsologtostderr -v=1
addons_test.go:808: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-775283 --alsologtostderr -v=1: (1.749724306s)
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:352: "headlamp-6945c6f4d-c5dfq" [c765848f-59de-4ac9-9de0-b4b6245c6f96] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-6945c6f4d-c5dfq" [c765848f-59de-4ac9-9de0-b4b6245c6f96] Running / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-6945c6f4d-c5dfq" [c765848f-59de-4ac9-9de0-b4b6245c6f96] Running
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 10.003220474s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-775283 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-775283 addons disable headlamp --alsologtostderr -v=1: (5.825167534s)
--- PASS: TestAddons/parallel/Headlamp (17.58s)
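
The addon lifecycle tested above reduces to an enable, a readiness wait, and a disable. A sketch using the profile from this run; the kubectl wait line is an illustrative stand-in for the test's own readiness poller:

	minikube addons enable headlamp -p addons-775283
	kubectl wait --namespace headlamp --for=condition=Ready pod -l app.kubernetes.io/name=headlamp --timeout=8m
	minikube -p addons-775283 addons disable headlamp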

TestAddons/parallel/CloudSpanner (6.66s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-86bd5cbb97-6hcw8" [bccf94e6-cc05-49b8-be62-e4aa70e77e64] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.003887586s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-775283 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (6.66s)

TestAddons/parallel/LocalPath (51.89s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-775283 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-775283 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-775283 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-775283 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-775283 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-775283 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-775283 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [1766a604-7fad-4c6b-a44e-764fde30e1dd] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [1766a604-7fad-4c6b-a44e-764fde30e1dd] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [1766a604-7fad-4c6b-a44e-764fde30e1dd] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.004165345s
addons_test.go:967: (dbg) Run:  kubectl --context addons-775283 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-arm64 -p addons-775283 ssh "cat /opt/local-path-provisioner/pvc-b4467e23-f08b-4bc8-a88e-fbb18fc207c0_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-775283 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-775283 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-775283 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-775283 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.325759007s)
--- PASS: TestAddons/parallel/LocalPath (51.89s)
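
For reference, the local-path check above writes through a PVC and then reads the backing file straight off the node. A sketch with the dynamic PV name replaced by a placeholder, since the real name (pvc-b4467e23-... in this run) is assigned at provisioning time:

	kubectl apply -f testdata/storage-provisioner-rancher/pvc.yaml
	kubectl apply -f testdata/storage-provisioner-rancher/pod.yaml
	# after the pod completes, the provisioned directory is visible on the node
	minikube -p addons-775283 ssh "cat /opt/local-path-provisioner/<pv-name>_default_test-pvc/file1"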

TestAddons/parallel/NvidiaDevicePlugin (6.01s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-j65t6" [98699e5b-0037-43fb-a0e6-91151cf32e31] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.003988537s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-775283 addons disable nvidia-device-plugin --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-775283 addons disable nvidia-device-plugin --alsologtostderr -v=1: (1.009291937s)
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.01s)

TestAddons/parallel/Yakd (11.87s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-6vj7z" [2f77c77d-b48b-4687-a312-c537cbeebf31] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003557212s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-775283 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-775283 addons disable yakd --alsologtostderr -v=1: (5.861497911s)
--- PASS: TestAddons/parallel/Yakd (11.87s)

TestAddons/StoppedEnableDisable (12.4s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-775283
addons_test.go:172: (dbg) Done: out/minikube-linux-arm64 stop -p addons-775283: (12.130206988s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-775283
addons_test.go:180: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-775283
addons_test.go:185: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-775283
--- PASS: TestAddons/StoppedEnableDisable (12.40s)

TestCertOptions (38.62s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-250535 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-250535 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (35.744471949s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-250535 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-250535 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-250535 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-250535" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-250535
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-250535: (2.122742738s)
--- PASS: TestCertOptions (38.62s)
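
The assertion here is that the extra SANs and the custom port make it into the generated apiserver certificate. A condensed sketch of the check, using the flags from this run:

	minikube start -p cert-options-250535 --apiserver-ips=192.168.15.15 --apiserver-names=www.google.com --apiserver-port=8555
	# the requested IPs/names should appear in the Subject Alternative Name section
	minikube -p cert-options-250535 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"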

TestCertExpiration (237s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-153767 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-153767 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd: (46.743697767s)
E1101 09:12:19.197511    4107 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2307/.minikube/profiles/addons-775283/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-153767 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-153767 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd: (7.339909963s)
helpers_test.go:175: Cleaning up "cert-expiration-153767" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-153767
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-153767: (2.917599875s)
--- PASS: TestCertExpiration (237.00s)
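
The ~237s runtime is dominated by waiting out the 3m expiry: start with short-lived certs, let them lapse, then restart with a longer expiry to force regeneration. A minimal sketch of that sequence:

	minikube start -p cert-expiration-153767 --cert-expiration=3m
	# ...wait roughly 3 minutes for the certificates to expire...
	minikube start -p cert-expiration-153767 --cert-expiration=8760h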

TestForceSystemdFlag (36.51s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-108761 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
E1101 09:10:22.267492    4107 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2307/.minikube/profiles/addons-775283/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-108761 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (33.793312782s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-108761 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-108761" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-108761
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-108761: (2.381614752s)
--- PASS: TestForceSystemdFlag (36.51s)
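
The test asserts that --force-systemd is reflected in the rendered containerd config inside the node. Grepping for SystemdCgroup is an illustrative shorthand, not the test's exact assertion:

	minikube start -p force-systemd-flag-108761 --force-systemd --driver=docker --container-runtime=containerd
	minikube -p force-systemd-flag-108761 ssh "cat /etc/containerd/config.toml" | grep SystemdCgroup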

TestForceSystemdEnv (48.63s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-525426 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-525426 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (45.279082146s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-525426 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-525426" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-525426
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-525426: (2.956509958s)
--- PASS: TestForceSystemdEnv (48.63s)

TestDockerEnvContainerd (49.32s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd true linux arm64
docker_test.go:181: (dbg) Run:  out/minikube-linux-arm64 start -p dockerenv-120018 --driver=docker  --container-runtime=containerd
docker_test.go:181: (dbg) Done: out/minikube-linux-arm64 start -p dockerenv-120018 --driver=docker  --container-runtime=containerd: (33.461417189s)
docker_test.go:189: (dbg) Run:  /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-120018"
docker_test.go:189: (dbg) Done: /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-120018": (1.100455149s)
docker_test.go:220: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-TnQ2zGY02zx8/agent.24614" SSH_AGENT_PID="24615" DOCKER_HOST=ssh://docker@127.0.0.1:32773 docker version"
docker_test.go:243: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-TnQ2zGY02zx8/agent.24614" SSH_AGENT_PID="24615" DOCKER_HOST=ssh://docker@127.0.0.1:32773 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env"
docker_test.go:243: (dbg) Done: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-TnQ2zGY02zx8/agent.24614" SSH_AGENT_PID="24615" DOCKER_HOST=ssh://docker@127.0.0.1:32773 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env": (1.298380492s)
docker_test.go:250: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-TnQ2zGY02zx8/agent.24614" SSH_AGENT_PID="24615" DOCKER_HOST=ssh://docker@127.0.0.1:32773 docker image ls"
helpers_test.go:175: Cleaning up "dockerenv-120018" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p dockerenv-120018
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p dockerenv-120018: (2.043071514s)
--- PASS: TestDockerEnvContainerd (49.32s)
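
The exported SSH_AUTH_SOCK/SSH_AGENT_PID/DOCKER_HOST variables above are what docker-env emits in --ssh-host mode. In interactive use the same wiring is typically applied with eval; a sketch, not the test's exact invocation:

	eval "$(minikube -p dockerenv-120018 docker-env --ssh-host --ssh-add)"
	docker version
	DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env
	docker image ls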

TestErrorSpam/setup (33.01s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-623442 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-623442 --driver=docker  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-623442 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-623442 --driver=docker  --container-runtime=containerd: (33.010955141s)
--- PASS: TestErrorSpam/setup (33.01s)

TestErrorSpam/start (0.82s)

=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-623442 --log_dir /tmp/nospam-623442 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-623442 --log_dir /tmp/nospam-623442 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-623442 --log_dir /tmp/nospam-623442 start --dry-run
--- PASS: TestErrorSpam/start (0.82s)

TestErrorSpam/status (1.14s)

=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-623442 --log_dir /tmp/nospam-623442 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-623442 --log_dir /tmp/nospam-623442 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-623442 --log_dir /tmp/nospam-623442 status
--- PASS: TestErrorSpam/status (1.14s)

TestErrorSpam/pause (1.71s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-623442 --log_dir /tmp/nospam-623442 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-623442 --log_dir /tmp/nospam-623442 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-623442 --log_dir /tmp/nospam-623442 pause
--- PASS: TestErrorSpam/pause (1.71s)

TestErrorSpam/unpause (1.95s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-623442 --log_dir /tmp/nospam-623442 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-623442 --log_dir /tmp/nospam-623442 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-623442 --log_dir /tmp/nospam-623442 unpause
--- PASS: TestErrorSpam/unpause (1.95s)

TestErrorSpam/stop (1.63s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-623442 --log_dir /tmp/nospam-623442 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-arm64 -p nospam-623442 --log_dir /tmp/nospam-623442 stop: (1.429703863s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-623442 --log_dir /tmp/nospam-623442 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-623442 --log_dir /tmp/nospam-623442 stop
--- PASS: TestErrorSpam/stop (1.63s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21835-2307/.minikube/files/etc/test/nested/copy/4107/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (78.78s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-arm64 start -p functional-173309 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
E1101 08:37:19.198678    4107 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2307/.minikube/profiles/addons-775283/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 08:37:19.205057    4107 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2307/.minikube/profiles/addons-775283/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 08:37:19.216400    4107 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2307/.minikube/profiles/addons-775283/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 08:37:19.237858    4107 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2307/.minikube/profiles/addons-775283/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 08:37:19.279244    4107 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2307/.minikube/profiles/addons-775283/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 08:37:19.360741    4107 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2307/.minikube/profiles/addons-775283/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 08:37:19.522234    4107 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2307/.minikube/profiles/addons-775283/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 08:37:19.843743    4107 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2307/.minikube/profiles/addons-775283/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 08:37:20.485038    4107 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2307/.minikube/profiles/addons-775283/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 08:37:21.766305    4107 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2307/.minikube/profiles/addons-775283/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 08:37:24.328329    4107 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2307/.minikube/profiles/addons-775283/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 08:37:29.449945    4107 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2307/.minikube/profiles/addons-775283/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 08:37:39.691595    4107 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2307/.minikube/profiles/addons-775283/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 08:38:00.176120    4107 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2307/.minikube/profiles/addons-775283/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-arm64 start -p functional-173309 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (1m18.782638545s)
--- PASS: TestFunctional/serial/StartWithProxy (78.78s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (7.65s)

=== RUN   TestFunctional/serial/SoftStart
I1101 08:38:10.482444    4107 config.go:182] Loaded profile config "functional-173309": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-arm64 start -p functional-173309 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-arm64 start -p functional-173309 --alsologtostderr -v=8: (7.648489251s)
functional_test.go:678: soft start took 7.651150394s for "functional-173309" cluster.
I1101 08:38:18.131312    4107 config.go:182] Loaded profile config "functional-173309": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/SoftStart (7.65s)

TestFunctional/serial/KubeContext (0.06s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

TestFunctional/serial/KubectlGetPods (0.11s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-173309 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.11s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.44s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-173309 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-173309 cache add registry.k8s.io/pause:3.1: (1.311501664s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-173309 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-173309 cache add registry.k8s.io/pause:3.3: (1.078629299s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-173309 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-173309 cache add registry.k8s.io/pause:latest: (1.049695871s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.44s)

TestFunctional/serial/CacheCmd/cache/add_local (1.26s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-173309 /tmp/TestFunctionalserialCacheCmdcacheadd_local3457342902/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-arm64 -p functional-173309 cache add minikube-local-cache-test:functional-173309
functional_test.go:1109: (dbg) Run:  out/minikube-linux-arm64 -p functional-173309 cache delete minikube-local-cache-test:functional-173309
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-173309
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.26s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-arm64 -p functional-173309 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.88s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-arm64 -p functional-173309 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 -p functional-173309 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-173309 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (312.136087ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-arm64 -p functional-173309 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-arm64 -p functional-173309 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.88s)
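
The reload sequence above is the interesting one: the image is removed inside the node, its absence is confirmed, and cache reload pushes it back from minikube's on-host cache. Condensed from the log:

	minikube -p functional-173309 ssh sudo crictl rmi registry.k8s.io/pause:latest
	minikube -p functional-173309 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # exit status 1: image gone
	minikube -p functional-173309 cache reload
	minikube -p functional-173309 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again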

TestFunctional/serial/CacheCmd/cache/delete (0.11s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.11s)

TestFunctional/serial/MinikubeKubectlCmd (0.14s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-arm64 -p functional-173309 kubectl -- --context functional-173309 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.14s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-173309 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

TestFunctional/serial/ExtraConfig (49.18s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-arm64 start -p functional-173309 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1101 08:38:41.138010    4107 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2307/.minikube/profiles/addons-775283/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-arm64 start -p functional-173309 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (49.175741254s)
functional_test.go:776: restart took 49.175824701s for "functional-173309" cluster.
I1101 08:39:14.832752    4107 config.go:182] Loaded profile config "functional-173309": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/ExtraConfig (49.18s)
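
--extra-config passes a flag through to the named Kubernetes component on restart, in component.key=value form. The invocation from this run, shown standalone:

	minikube start -p functional-173309 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all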

TestFunctional/serial/ComponentHealth (0.09s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-173309 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.09s)

TestFunctional/serial/LogsCmd (1.51s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-arm64 -p functional-173309 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-arm64 -p functional-173309 logs: (1.506886852s)
--- PASS: TestFunctional/serial/LogsCmd (1.51s)

TestFunctional/serial/LogsFileCmd (1.44s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-arm64 -p functional-173309 logs --file /tmp/TestFunctionalserialLogsFileCmd2631637873/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-arm64 -p functional-173309 logs --file /tmp/TestFunctionalserialLogsFileCmd2631637873/001/logs.txt: (1.440059483s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.44s)

TestFunctional/serial/InvalidService (4.88s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-173309 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-173309
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-173309: exit status 115 (950.488477ms)

-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:30547 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-173309 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.88s)

TestFunctional/parallel/ConfigCmd (0.39s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-173309 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-173309 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-173309 config get cpus: exit status 14 (59.173222ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-173309 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-173309 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-173309 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-173309 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-173309 config get cpus: exit status 14 (62.800487ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.39s)
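
The exit status 14 above is minikube's "key could not be found in config" code for config get, which the set/unset round trip exercises. Condensed:

	minikube -p functional-173309 config set cpus 2
	minikube -p functional-173309 config get cpus     # prints 2
	minikube -p functional-173309 config unset cpus
	minikube -p functional-173309 config get cpus     # exit status 14: key not in config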

TestFunctional/parallel/DashboardCmd (7.41s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-173309 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-173309 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 39919: os: process already finished
E1101 08:40:03.061839    4107 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2307/.minikube/profiles/addons-775283/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctional/parallel/DashboardCmd (7.41s)

TestFunctional/parallel/DryRun (0.46s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-arm64 start -p functional-173309 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-173309 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (192.865808ms)

-- stdout --
	* [functional-173309] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21835
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21835-2307/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21835-2307/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I1101 08:39:55.281267   39561 out.go:360] Setting OutFile to fd 1 ...
	I1101 08:39:55.281496   39561 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 08:39:55.281524   39561 out.go:374] Setting ErrFile to fd 2...
	I1101 08:39:55.281543   39561 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 08:39:55.281876   39561 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21835-2307/.minikube/bin
	I1101 08:39:55.282316   39561 out.go:368] Setting JSON to false
	I1101 08:39:55.283306   39561 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":1345,"bootTime":1761985051,"procs":203,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1101 08:39:55.283402   39561 start.go:143] virtualization:  
	I1101 08:39:55.286741   39561 out.go:179] * [functional-173309] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1101 08:39:55.289614   39561 out.go:179]   - MINIKUBE_LOCATION=21835
	I1101 08:39:55.289660   39561 notify.go:221] Checking for updates...
	I1101 08:39:55.292511   39561 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 08:39:55.295692   39561 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21835-2307/kubeconfig
	I1101 08:39:55.298428   39561 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21835-2307/.minikube
	I1101 08:39:55.301316   39561 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1101 08:39:55.304183   39561 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 08:39:55.307507   39561 config.go:182] Loaded profile config "functional-173309": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1101 08:39:55.308076   39561 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 08:39:55.343346   39561 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1101 08:39:55.343459   39561 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 08:39:55.401890   39561 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-01 08:39:55.391787375 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 08:39:55.402010   39561 docker.go:319] overlay module found
	I1101 08:39:55.405627   39561 out.go:179] * Using the docker driver based on existing profile
	I1101 08:39:55.408568   39561 start.go:309] selected driver: docker
	I1101 08:39:55.408589   39561 start.go:930] validating driver "docker" against &{Name:functional-173309 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-173309 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 08:39:55.408695   39561 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 08:39:55.412188   39561 out.go:203] 
	W1101 08:39:55.415135   39561 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1101 08:39:55.417837   39561 out.go:203] 

** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-arm64 start -p functional-173309 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.46s)
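
The failing dry run above exercises minikube's memory floor: a request of 250MB trips the RSRC_INSUFFICIENT_REQ_MEMORY guard because it falls below the 1800MB usable minimum quoted in the error. A minimal Go sketch of that validation, using only the constant from the message above (the function and constant names here are illustrative, not minikube's):

package main

import "fmt"

// minUsableMB comes from the "usable minimum of 1800MB" message in the log above.
const minUsableMB = 1800

// validateRequestedMemory mirrors the guard the dry run hits: requests below
// the floor are rejected before any cluster work starts.
func validateRequestedMemory(reqMB int) error {
	if reqMB < minUsableMB {
		return fmt.Errorf("RSRC_INSUFFICIENT_REQ_MEMORY: requested %dMiB is less than the usable minimum of %dMB", reqMB, minUsableMB)
	}
	return nil
}

func main() {
	fmt.Println(validateRequestedMemory(250))  // rejected, as in the dry run above
	fmt.Println(validateRequestedMemory(3072)) // accepted
}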

TestFunctional/parallel/InternationalLanguage (0.2s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-arm64 start -p functional-173309 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-173309 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (202.999363ms)

-- stdout --
	* [functional-173309] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21835
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21835-2307/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21835-2307/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1101 08:39:55.082012   39514 out.go:360] Setting OutFile to fd 1 ...
	I1101 08:39:55.082218   39514 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 08:39:55.082247   39514 out.go:374] Setting ErrFile to fd 2...
	I1101 08:39:55.082266   39514 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 08:39:55.084032   39514 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21835-2307/.minikube/bin
	I1101 08:39:55.084598   39514 out.go:368] Setting JSON to false
	I1101 08:39:55.085730   39514 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":1344,"bootTime":1761985051,"procs":203,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1101 08:39:55.085904   39514 start.go:143] virtualization:  
	I1101 08:39:55.089056   39514 out.go:179] * [functional-173309] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	I1101 08:39:55.093152   39514 notify.go:221] Checking for updates...
	I1101 08:39:55.093118   39514 out.go:179]   - MINIKUBE_LOCATION=21835
	I1101 08:39:55.096990   39514 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 08:39:55.100174   39514 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21835-2307/kubeconfig
	I1101 08:39:55.103294   39514 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21835-2307/.minikube
	I1101 08:39:55.106256   39514 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1101 08:39:55.109267   39514 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 08:39:55.112921   39514 config.go:182] Loaded profile config "functional-173309": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1101 08:39:55.114276   39514 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 08:39:55.145606   39514 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1101 08:39:55.145775   39514 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 08:39:55.206762   39514 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-01 08:39:55.197457285 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 08:39:55.206867   39514 docker.go:319] overlay module found
	I1101 08:39:55.210117   39514 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1101 08:39:55.212912   39514 start.go:309] selected driver: docker
	I1101 08:39:55.212931   39514 start.go:930] validating driver "docker" against &{Name:functional-173309 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-173309 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 08:39:55.213034   39514 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 08:39:55.216546   39514 out.go:203] 
	W1101 08:39:55.219255   39514 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1101 08:39:55.222098   39514 out.go:203] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.20s)
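
The French lines above ("Utilisation du pilote docker basé sur le profil existant" is "Using the docker driver based on existing profile"; the X line is the same RSRC_INSUFFICIENT_REQ_MEMORY exit seen in DryRun) come from the very same binary: minikube selects its message catalog from the caller's locale environment. A minimal sketch of driving a localized run from Go, assuming the binary honors LC_ALL/LANG the way this run indicates:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-arm64", "start", "-p", "functional-173309",
		"--dry-run", "--memory", "250MB", "--driver=docker", "--container-runtime=containerd")
	// Override the locale so the CLI emits its French catalog, as in the log above.
	cmd.Env = append(os.Environ(), "LC_ALL=fr_FR.UTF-8", "LANG=fr_FR.UTF-8")
	out, _ := cmd.CombinedOutput() // exit status 23 is expected: 250MB is below the floor
	fmt.Printf("%s", out)
}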

TestFunctional/parallel/StatusCmd (1.23s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-arm64 -p functional-173309 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-arm64 -p functional-173309 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-arm64 -p functional-173309 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.23s)
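
The -f flag above takes a plain Go text/template evaluated against the status object, which is why keys like {{.Host}} appear verbatim on the command line. A small sketch of that rendering with a stand-in struct (the field names mirror the template above, which is not minikube's internal type; note the template's literal label text spells "kublet", which is just literal output, not a field reference):

package main

import (
	"os"
	"text/template"
)

// Status is a stand-in carrying the fields the template references.
type Status struct {
	Host, Kubelet, APIServer, Kubeconfig string
}

func main() {
	tmpl := template.Must(template.New("status").Parse(
		"host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}\n"))
	_ = tmpl.Execute(os.Stdout, Status{Host: "Running", Kubelet: "Running", APIServer: "Running", Kubeconfig: "Configured"})
}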

TestFunctional/parallel/ServiceCmdConnect (9.65s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-173309 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-173309 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-dkkkb" [61450f4a-04a6-402a-be4b-bbc01e2e4379] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-connect-7d85dfc575-dkkkb" [61450f4a-04a6-402a-be4b-bbc01e2e4379] Running
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 9.003623654s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-arm64 -p functional-173309 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.49.2:30876
functional_test.go:1680: http://192.168.49.2:30876: success! body:
Request served by hello-node-connect-7d85dfc575-dkkkb

HTTP/1.1 GET /

Host: 192.168.49.2:30876
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (9.65s)
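
The final step above fetches the NodePort URL that `service hello-node-connect --url` printed and checks the echo-server's response body. A sketch of that probe, reusing the endpoint from this run's log and retrying briefly in case the service is still settling (the retry budget here is illustrative):

package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	url := "http://192.168.49.2:30876" // endpoint printed by `service --url` above
	for attempt := 0; attempt < 10; attempt++ {
		resp, err := http.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("%s: success! body:\n%s", url, body)
				return
			}
		}
		time.Sleep(2 * time.Second) // the NodePort may lag the pod's Ready condition
	}
	fmt.Println("endpoint never answered with 200")
}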

TestFunctional/parallel/AddonsCmd (0.18s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-arm64 -p functional-173309 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-arm64 -p functional-173309 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.18s)

TestFunctional/parallel/PersistentVolumeClaim (25s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [8f177008-7e5b-4c47-8c4e-fda2cf11fe2e] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003537603s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-173309 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-173309 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-173309 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-173309 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [4771208c-2008-4944-9235-8adb1b3714ed] Pending
helpers_test.go:352: "sp-pod" [4771208c-2008-4944-9235-8adb1b3714ed] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [4771208c-2008-4944-9235-8adb1b3714ed] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 9.003911427s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-173309 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-173309 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-173309 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [bd82a354-9917-4166-bf51-f239fb95cd6a] Pending
helpers_test.go:352: "sp-pod" [bd82a354-9917-4166-bf51-f239fb95cd6a] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [bd82a354-9917-4166-bf51-f239fb95cd6a] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.003623038s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-173309 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (25.00s)
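
The persistence check above works by writing a file through the first pod, deleting that pod, and reading the file back from a replacement pod bound to the same claim. A sketch of the same round trip with plain kubectl, not the test's own helpers (manifest paths, pod name, and context come from the log above):

package main

import (
	"log"
	"os/exec"
)

func kubectl(args ...string) {
	full := append([]string{"--context", "functional-173309"}, args...)
	if out, err := exec.Command("kubectl", full...).CombinedOutput(); err != nil {
		log.Fatalf("kubectl %v: %v\n%s", args, err, out)
	}
}

func main() {
	kubectl("apply", "-f", "testdata/storage-provisioner/pvc.yaml")
	kubectl("apply", "-f", "testdata/storage-provisioner/pod.yaml")
	kubectl("wait", "--for=condition=Ready", "pod/sp-pod", "--timeout=4m")
	kubectl("exec", "sp-pod", "--", "touch", "/tmp/mount/foo") // write through the claim
	kubectl("delete", "-f", "testdata/storage-provisioner/pod.yaml")
	kubectl("apply", "-f", "testdata/storage-provisioner/pod.yaml") // new pod, same PVC
	kubectl("wait", "--for=condition=Ready", "pod/sp-pod", "--timeout=4m")
	kubectl("exec", "sp-pod", "--", "ls", "/tmp/mount") // foo must still be there
}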

TestFunctional/parallel/SSHCmd (0.72s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-arm64 -p functional-173309 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-arm64 -p functional-173309 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.72s)

TestFunctional/parallel/CpCmd (2.36s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-173309 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-173309 ssh -n functional-173309 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-173309 cp functional-173309:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2485792979/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-173309 ssh -n functional-173309 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-173309 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-173309 ssh -n functional-173309 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.36s)

TestFunctional/parallel/FileSync (0.33s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/4107/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-arm64 -p functional-173309 ssh "sudo cat /etc/test/nested/copy/4107/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.33s)

TestFunctional/parallel/CertSync (2.29s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/4107.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-173309 ssh "sudo cat /etc/ssl/certs/4107.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/4107.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-173309 ssh "sudo cat /usr/share/ca-certificates/4107.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-173309 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/41072.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-173309 ssh "sudo cat /etc/ssl/certs/41072.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/41072.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-173309 ssh "sudo cat /usr/share/ca-certificates/41072.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-173309 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.29s)
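
The hashed filenames checked above (51391683.0, 3ec20f2e.0) follow OpenSSL's lookup convention: certificates in /etc/ssl/certs are reachable via a <subject_hash>.0 link so the library can find them without scanning every file. Presumably each hash here belongs to the corresponding synced test certificate. A sketch that prints a PEM's subject hash so the expected link name can be derived (the input path is one of the files from this test):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// `openssl x509 -subject_hash` prints the 8-hex-digit name OpenSSL uses
	// when looking a certificate up in a hashed directory.
	out, err := exec.Command("openssl", "x509", "-noout", "-subject_hash",
		"-in", "/usr/share/ca-certificates/4107.pem").Output()
	if err != nil {
		fmt.Println("openssl:", err)
		return
	}
	fmt.Printf("expected link: /etc/ssl/certs/%s.0\n", strings.TrimSpace(string(out)))
}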

TestFunctional/parallel/NodeLabels (0.11s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-173309 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.11s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.97s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-173309 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-173309 ssh "sudo systemctl is-active docker": exit status 1 (300.002122ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-173309 ssh "sudo systemctl is-active crio"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-173309 ssh "sudo systemctl is-active crio": exit status 1 (672.174001ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.97s)
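
The "Process exited with status 3" lines above are not SSH failures: `systemctl is-active` exits 0 only when the unit is active and non-zero otherwise, with 3 being the usual code for an inactive unit, so the test treats a non-zero exit plus "inactive" on stdout as the desired outcome. A sketch of that interpretation:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// runtimeDisabled reports whether a runtime's systemd unit is not running.
// exec's Output still returns the captured stdout alongside the *ExitError,
// so the printed state can be inspected even though the command "failed".
func runtimeDisabled(unit string) bool {
	out, err := exec.Command("systemctl", "is-active", unit).Output()
	state := strings.TrimSpace(string(out))
	return err != nil && state == "inactive"
}

func main() {
	for _, unit := range []string{"docker", "crio"} {
		fmt.Printf("%s inactive: %v\n", unit, runtimeDisabled(unit))
	}
}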

TestFunctional/parallel/License (0.35s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.35s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.65s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-173309 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-173309 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-173309 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-173309 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 36814: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.65s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-173309 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.45s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-173309 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [b2cc1a3d-0300-4db8-94d7-5ad6e22763bb] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx-svc" [b2cc1a3d-0300-4db8-94d7-5ad6e22763bb] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.003846441s
I1101 08:39:33.967386    4107 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.45s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.1s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-173309 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.10s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.98.59.132 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-173309 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ServiceCmd/DeployApp (7.22s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-173309 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-173309 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-s4vkc" [6e66b1bf-0be9-4b31-ae68-be2dce1cc690] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-75c85bcc94-s4vkc" [6e66b1bf-0be9-4b31-ae68-be2dce1cc690] Running
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 7.004377817s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (7.22s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.44s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.44s)

TestFunctional/parallel/ProfileCmd/profile_list (0.48s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1330: Took "396.66391ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1344: Took "80.114859ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.48s)
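
The Took lines above time two variants of the same command; the -l listing returns in ~80ms versus ~400ms, presumably because the light mode skips probing each cluster's live status. A sketch of that wall-clock measurement, mirroring the test's Took "..." output:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// timed runs the minikube binary from this job and returns the wall time.
func timed(args ...string) time.Duration {
	start := time.Now()
	exec.Command("out/minikube-linux-arm64", args...).Run()
	return time.Since(start)
}

func main() {
	fmt.Printf("Took %q to run \"profile list\"\n", timed("profile", "list").String())
	fmt.Printf("Took %q to run \"profile list -l\"\n", timed("profile", "list", "-l").String())
}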

TestFunctional/parallel/ServiceCmd/List (0.62s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-arm64 -p functional-173309 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.62s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.53s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1381: Took "441.523045ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1394: Took "83.568282ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.53s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.7s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-arm64 -p functional-173309 service list -o json
functional_test.go:1504: Took "700.054774ms" to run "out/minikube-linux-arm64 -p functional-173309 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.70s)

TestFunctional/parallel/MountCmd/any-port (9.24s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-173309 /tmp/TestFunctionalparallelMountCmdany-port4018998894/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1761986391903009844" to /tmp/TestFunctionalparallelMountCmdany-port4018998894/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1761986391903009844" to /tmp/TestFunctionalparallelMountCmdany-port4018998894/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1761986391903009844" to /tmp/TestFunctionalparallelMountCmdany-port4018998894/001/test-1761986391903009844
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-173309 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-173309 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (446.58933ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1101 08:39:52.351080    4107 retry.go:31] will retry after 735.787872ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-173309 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-173309 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Nov  1 08:39 created-by-test
-rw-r--r-- 1 docker docker 24 Nov  1 08:39 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Nov  1 08:39 test-1761986391903009844
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-173309 ssh cat /mount-9p/test-1761986391903009844
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-173309 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [8acc5e0e-538c-4213-b7a1-5331d86439fa] Pending
helpers_test.go:352: "busybox-mount" [8acc5e0e-538c-4213-b7a1-5331d86439fa] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [8acc5e0e-538c-4213-b7a1-5331d86439fa] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [8acc5e0e-538c-4213-b7a1-5331d86439fa] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.004157377s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-173309 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-173309 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-173309 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-173309 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-173309 /tmp/TestFunctionalparallelMountCmdany-port4018998894/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (9.24s)
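
The retry.go line above shows the probe pattern: the first `findmnt -T /mount-9p | grep 9p` runs before the 9p server has finished mounting, fails, and is retried after a short backoff. A sketch of that wait loop (the budget and initial delay here are illustrative, not the test's exact values):

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForMount polls findmnt until the target shows a 9p filesystem, matching
// the test's `findmnt -T /mount-9p | grep 9p` probe. findmnt -T alone exits 0
// for whatever filesystem encloses the path, so the output check is what
// actually distinguishes "mounted" from "not yet".
func waitForMount(dir string, budget time.Duration) error {
	deadline := time.Now().Add(budget)
	backoff := 500 * time.Millisecond
	for time.Now().Before(deadline) {
		out, err := exec.Command("findmnt", "-T", dir).Output()
		if err == nil && strings.Contains(string(out), "9p") {
			return nil
		}
		time.Sleep(backoff)
		backoff *= 2
	}
	return fmt.Errorf("%s: no 9p mount within %s", dir, budget)
}

func main() {
	fmt.Println(waitForMount("/mount-9p", 30*time.Second))
}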

TestFunctional/parallel/ServiceCmd/HTTPS (0.38s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-arm64 -p functional-173309 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.49.2:31474
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.38s)

TestFunctional/parallel/ServiceCmd/Format (0.44s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-arm64 -p functional-173309 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.44s)

TestFunctional/parallel/ServiceCmd/URL (0.44s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-arm64 -p functional-173309 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:31474
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.44s)

TestFunctional/parallel/MountCmd/specific-port (2.15s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-173309 /tmp/TestFunctionalparallelMountCmdspecific-port171421242/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-173309 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-173309 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (636.36905ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1101 08:40:01.775442    4107 retry.go:31] will retry after 289.885936ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-173309 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-173309 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-173309 /tmp/TestFunctionalparallelMountCmdspecific-port171421242/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-173309 ssh "sudo umount -f /mount-9p"
2025/11/01 08:40:02 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-173309 ssh "sudo umount -f /mount-9p": exit status 1 (364.99221ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-173309 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-173309 /tmp/TestFunctionalparallelMountCmdspecific-port171421242/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.15s)

TestFunctional/parallel/Version/short (0.09s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-arm64 -p functional-173309 version --short
--- PASS: TestFunctional/parallel/Version/short (0.09s)

TestFunctional/parallel/Version/components (1.34s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-arm64 -p functional-173309 version -o=json --components
functional_test.go:2275: (dbg) Done: out/minikube-linux-arm64 -p functional-173309 version -o=json --components: (1.340406978s)
--- PASS: TestFunctional/parallel/Version/components (1.34s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.57s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-173309 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3495898967/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-173309 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3495898967/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-173309 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3495898967/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-173309 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-173309 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-173309 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-173309 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-173309 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3495898967/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-173309 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3495898967/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-173309 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3495898967/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.57s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-173309 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-173309 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.1
registry.k8s.io/kube-proxy:v1.34.1
registry.k8s.io/kube-controller-manager:v1.34.1
registry.k8s.io/kube-apiserver:v1.34.1
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-173309
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:functional-173309
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-173309 image ls --format short --alsologtostderr:
I1101 08:40:10.911723   42643 out.go:360] Setting OutFile to fd 1 ...
I1101 08:40:10.911917   42643 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1101 08:40:10.911930   42643 out.go:374] Setting ErrFile to fd 2...
I1101 08:40:10.911936   42643 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1101 08:40:10.912237   42643 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21835-2307/.minikube/bin
I1101 08:40:10.912869   42643 config.go:182] Loaded profile config "functional-173309": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1101 08:40:10.913033   42643 config.go:182] Loaded profile config "functional-173309": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1101 08:40:10.913666   42643 cli_runner.go:164] Run: docker container inspect functional-173309 --format={{.State.Status}}
I1101 08:40:10.946194   42643 ssh_runner.go:195] Run: systemctl --version
I1101 08:40:10.946242   42643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-173309
I1101 08:40:10.964611   42643 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21835-2307/.minikube/machines/functional-173309/id_rsa Username:docker}
I1101 08:40:11.072682   42643 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.24s)
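
The stderr above shows how `image ls` is implemented: minikube inspects the node container, opens an SSH session to it, and runs `sudo crictl images --output json` there. minikube's own JSON rendering of the result (shown in the ImageListJson section below) is a flat array; a sketch of decoding it, with the struct fields taken from that output:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// image matches the fields visible in the ImageListJson output below;
// note that size is a decimal string, not a number.
type image struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"`
}

func main() {
	out, err := exec.Command("out/minikube-linux-arm64", "-p", "functional-173309",
		"image", "ls", "--format", "json").Output()
	if err != nil {
		panic(err)
	}
	var images []image
	if err := json.Unmarshal(out, &images); err != nil {
		panic(err)
	}
	for _, img := range images {
		fmt.Printf("%v  %s bytes\n", img.RepoTags, img.Size)
	}
}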

TestFunctional/parallel/ImageCommands/ImageListTable (0.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-173309 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-173309 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                    IMAGE                    │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ registry.k8s.io/coredns/coredns             │ v1.12.1            │ sha256:138784 │ 20.4MB │
│ registry.k8s.io/etcd                        │ 3.6.4-0            │ sha256:a18947 │ 98.2MB │
│ registry.k8s.io/pause                       │ 3.1                │ sha256:8057e0 │ 262kB  │
│ registry.k8s.io/pause                       │ 3.3                │ sha256:3d1873 │ 249kB  │
│ registry.k8s.io/pause                       │ latest             │ sha256:8cb209 │ 71.3kB │
│ docker.io/library/minikube-local-cache-test │ functional-173309  │ sha256:820a3b │ 990B   │
│ registry.k8s.io/kube-controller-manager     │ v1.34.1            │ sha256:7eb2c6 │ 20.7MB │
│ registry.k8s.io/pause                       │ 3.10.1             │ sha256:d7b100 │ 268kB  │
│ docker.io/kindest/kindnetd                  │ v20250512-df8de77b │ sha256:b1a8c6 │ 40.6MB │
│ docker.io/kicbase/echo-server               │ functional-173309  │ sha256:ce2d2c │ 2.17MB │
│ docker.io/library/nginx                     │ latest             │ sha256:46fabd │ 58.3MB │
│ gcr.io/k8s-minikube/busybox                 │ 1.28.4-glibc       │ sha256:1611cd │ 1.94MB │
│ gcr.io/k8s-minikube/storage-provisioner     │ v5                 │ sha256:ba04bb │ 8.03MB │
│ registry.k8s.io/kube-apiserver              │ v1.34.1            │ sha256:43911e │ 24.6MB │
│ registry.k8s.io/kube-proxy                  │ v1.34.1            │ sha256:05baa9 │ 22.8MB │
│ registry.k8s.io/kube-scheduler              │ v1.34.1            │ sha256:b5f57e │ 15.8MB │
│ docker.io/library/nginx                     │ alpine             │ sha256:cbad63 │ 23.1MB │
└─────────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-173309 image ls --format table --alsologtostderr:
I1101 08:40:11.271391   42727 out.go:360] Setting OutFile to fd 1 ...
I1101 08:40:11.271914   42727 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1101 08:40:11.271947   42727 out.go:374] Setting ErrFile to fd 2...
I1101 08:40:11.271967   42727 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1101 08:40:11.272628   42727 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21835-2307/.minikube/bin
I1101 08:40:11.274021   42727 config.go:182] Loaded profile config "functional-173309": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1101 08:40:11.274250   42727 config.go:182] Loaded profile config "functional-173309": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1101 08:40:11.274962   42727 cli_runner.go:164] Run: docker container inspect functional-173309 --format={{.State.Status}}
I1101 08:40:11.293669   42727 ssh_runner.go:195] Run: systemctl --version
I1101 08:40:11.293786   42727 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-173309
I1101 08:40:11.314890   42727 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21835-2307/.minikube/machines/functional-173309/id_rsa Username:docker}
I1101 08:40:11.428798   42727 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.29s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-173309 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-173309 image ls --format json --alsologtostderr:
[{"id":"sha256:43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196","repoDigests":["registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.1"],"size":"24571109"},{"id":"sha256:7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.1"],"size":"20720058"},{"id":"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"262191"},{"id":"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"20392204"},{"id":"sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e","repoDigests":["registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"98207481"},{"id":"sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"249461"},{"id":"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"71300"},{"id":"sha256:cbad6347cca28a6ee7b08793856bc6fcb2c2c7a377a62a5e6d785895c4194ac1","repoDigests":["docker.io/library/nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14"],"repoTags":["docker.io/library/nginx:alpine"],"size":"23117513"},{"id":"sha256:46fabdd7f288c91a57f5d5fe12a02a41fbe855142469fcd50cbe885229064797","repoDigests":["docker.io/library/nginx@sha256:f547e3d0d5d02f7009737b284abc87d808e4252b42dceea361811e9fc606287f"],"repoTags":["docker.io/library/nginx:latest"],"size":"58267312"},{"id":"sha256:05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9","repoDigests":["registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.1"],"size":"22788047"},{"id":"sha256:b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0","repoDigests":["registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.1"],"size":"15779817"},{"id":"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"267939"},{"id":"sha256:820a3b7a5530e01b993d37b04039c85974d5194a85c8f1a7b4055bf9e895059d","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-173309"],"size":"990"},{"id":"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"1935750"},{"id":"sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"],"repoTags":[],"size":"74084559"},{"id":"sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"18306114"},{"id":"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-173309"],"size":"2173567"},{"id":"sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"40636774"},{"id":"sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"8034419"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-173309 image ls --format json --alsologtostderr:
I1101 08:40:11.175827   42707 out.go:360] Setting OutFile to fd 1 ...
I1101 08:40:11.176040   42707 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1101 08:40:11.176046   42707 out.go:374] Setting ErrFile to fd 2...
I1101 08:40:11.176051   42707 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1101 08:40:11.176410   42707 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21835-2307/.minikube/bin
I1101 08:40:11.177144   42707 config.go:182] Loaded profile config "functional-173309": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1101 08:40:11.177290   42707 config.go:182] Loaded profile config "functional-173309": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1101 08:40:11.177777   42707 cli_runner.go:164] Run: docker container inspect functional-173309 --format={{.State.Status}}
I1101 08:40:11.207244   42707 ssh_runner.go:195] Run: systemctl --version
I1101 08:40:11.207295   42707 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-173309
I1101 08:40:11.227922   42707 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21835-2307/.minikube/machines/functional-173309/id_rsa Username:docker}
I1101 08:40:11.357673   42707 ssh_runner.go:195] Run: sudo crictl images --output json
W1101 08:40:11.389419   42707 root.go:91] failed to log command end to audit: failed to find a log row with id equals to 66d40b69-e7e4-491d-9a3f-894677bcea68
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.29s)
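
Editor's note: the JSON printed by "image ls --format json" above is a flat array of objects with id, repoDigests, repoTags, and size fields, so it pipes cleanly into jq. A minimal sketch for listing just the tags, assuming jq is available on the host (jq is not part of this test):

out/minikube-linux-arm64 -p functional-173309 image ls --format json | jq -r '.[].repoTags[]' | sort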

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-173309 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-173309 image ls --format yaml --alsologtostderr:
- id: sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "40636774"
- id: sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "249461"
- id: sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "18306114"
- id: sha256:cbad6347cca28a6ee7b08793856bc6fcb2c2c7a377a62a5e6d785895c4194ac1
repoDigests:
- docker.io/library/nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14
repoTags:
- docker.io/library/nginx:alpine
size: "23117513"
- id: sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "8034419"
- id: sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "20392204"
- id: sha256:43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.1
size: "24571109"
- id: sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "262191"
- id: sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "71300"
- id: sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "1935750"
- id: sha256:b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.1
size: "15779817"
- id: sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-173309
size: "2173567"
- id: sha256:46fabdd7f288c91a57f5d5fe12a02a41fbe855142469fcd50cbe885229064797
repoDigests:
- docker.io/library/nginx@sha256:f547e3d0d5d02f7009737b284abc87d808e4252b42dceea361811e9fc606287f
repoTags:
- docker.io/library/nginx:latest
size: "58267312"
- id: sha256:7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.1
size: "20720058"
- id: sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
repoTags: []
size: "74084559"
- id: sha256:820a3b7a5530e01b993d37b04039c85974d5194a85c8f1a7b4055bf9e895059d
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-173309
size: "990"
- id: sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e
repoDigests:
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "98207481"
- id: sha256:05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9
repoDigests:
- registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a
repoTags:
- registry.k8s.io/kube-proxy:v1.34.1
size: "22788047"
- id: sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
repoTags:
- registry.k8s.io/pause:3.10.1
size: "267939"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-173309 image ls --format yaml --alsologtostderr:
I1101 08:40:10.939514   42644 out.go:360] Setting OutFile to fd 1 ...
I1101 08:40:10.939674   42644 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1101 08:40:10.939685   42644 out.go:374] Setting ErrFile to fd 2...
I1101 08:40:10.939690   42644 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1101 08:40:10.939955   42644 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21835-2307/.minikube/bin
I1101 08:40:10.940562   42644 config.go:182] Loaded profile config "functional-173309": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1101 08:40:10.940723   42644 config.go:182] Loaded profile config "functional-173309": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1101 08:40:10.941272   42644 cli_runner.go:164] Run: docker container inspect functional-173309 --format={{.State.Status}}
I1101 08:40:10.961907   42644 ssh_runner.go:195] Run: systemctl --version
I1101 08:40:10.961961   42644 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-173309
I1101 08:40:10.994772   42644 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21835-2307/.minikube/machines/functional-173309/id_rsa Username:docker}
I1101 08:40:11.123526   42644 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.33s)
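
Editor's note: the YAML listing above carries the same per-image fields as the JSON format. A quick cross-check that the two formats agree on the number of images, assuming jq is installed (illustrative only, not part of the test):

out/minikube-linux-arm64 -p functional-173309 image ls --format yaml | grep -c '^- id:'
out/minikube-linux-arm64 -p functional-173309 image ls --format json | jq length

Both commands should print the same count.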

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (3.96s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-arm64 -p functional-173309 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-173309 ssh pgrep buildkitd: exit status 1 (295.797337ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-arm64 -p functional-173309 image build -t localhost/my-image:functional-173309 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-arm64 -p functional-173309 image build -t localhost/my-image:functional-173309 testdata/build --alsologtostderr: (3.431288256s)
functional_test.go:338: (dbg) Stderr: out/minikube-linux-arm64 -p functional-173309 image build -t localhost/my-image:functional-173309 testdata/build --alsologtostderr:
I1101 08:40:11.743517   42851 out.go:360] Setting OutFile to fd 1 ...
I1101 08:40:11.743784   42851 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1101 08:40:11.743799   42851 out.go:374] Setting ErrFile to fd 2...
I1101 08:40:11.743805   42851 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1101 08:40:11.744118   42851 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21835-2307/.minikube/bin
I1101 08:40:11.744769   42851 config.go:182] Loaded profile config "functional-173309": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1101 08:40:11.746747   42851 config.go:182] Loaded profile config "functional-173309": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1101 08:40:11.747269   42851 cli_runner.go:164] Run: docker container inspect functional-173309 --format={{.State.Status}}
I1101 08:40:11.765754   42851 ssh_runner.go:195] Run: systemctl --version
I1101 08:40:11.765810   42851 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-173309
I1101 08:40:11.784086   42851 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21835-2307/.minikube/machines/functional-173309/id_rsa Username:docker}
I1101 08:40:11.888436   42851 build_images.go:162] Building image from path: /tmp/build.1582146337.tar
I1101 08:40:11.888544   42851 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1101 08:40:11.896631   42851 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1582146337.tar
I1101 08:40:11.900279   42851 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1582146337.tar: stat -c "%s %y" /var/lib/minikube/build/build.1582146337.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.1582146337.tar': No such file or directory
I1101 08:40:11.900311   42851 ssh_runner.go:362] scp /tmp/build.1582146337.tar --> /var/lib/minikube/build/build.1582146337.tar (3072 bytes)
I1101 08:40:11.918770   42851 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1582146337
I1101 08:40:11.927667   42851 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1582146337 -xf /var/lib/minikube/build/build.1582146337.tar
I1101 08:40:11.936182   42851 containerd.go:394] Building image: /var/lib/minikube/build/build.1582146337
I1101 08:40:11.936278   42851 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.1582146337 --local dockerfile=/var/lib/minikube/build/build.1582146337 --output type=image,name=localhost/my-image:functional-173309
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

                                                
                                                
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.6s

                                                
                                                
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

                                                
                                                
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 DONE 0.1s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.2s
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.4s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.1s done
#5 DONE 0.6s

                                                
                                                
#6 [2/3] RUN true
#6 DONE 0.5s

                                                
                                                
#7 [3/3] ADD content.txt /
#7 DONE 0.0s

                                                
                                                
#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:4dcd18c535b6aeb5108f803d4af2ac0850c240d9a91763da099f11e184c9e8dc 0.0s done
#8 exporting config sha256:dc13ee6605c3f0b60ad74e7ef5ff56df95d28f655a2a8bfd9c7451c289f96ef0 0.0s done
#8 naming to localhost/my-image:functional-173309 done
#8 DONE 0.2s
I1101 08:40:15.087974   42851 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.1582146337 --local dockerfile=/var/lib/minikube/build/build.1582146337 --output type=image,name=localhost/my-image:functional-173309: (3.151661912s)
I1101 08:40:15.088068   42851 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1582146337
I1101 08:40:15.099720   42851 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1582146337.tar
I1101 08:40:15.111991   42851 build_images.go:218] Built localhost/my-image:functional-173309 from /tmp/build.1582146337.tar
I1101 08:40:15.112032   42851 build_images.go:134] succeeded building to: functional-173309
I1101 08:40:15.112037   42851 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-173309 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.96s)
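
Editor's note: the buildkit steps above (#1 load Dockerfile, #5 FROM gcr.io/k8s-minikube/busybox:latest, #6 RUN true, #7 ADD content.txt /) imply a three-instruction Dockerfile. A sketch that reproduces the build by hand; the context directory and the contents of content.txt are assumptions, since testdata/build itself is not reproduced in this report:

mkdir -p /tmp/build && cd /tmp/build
printf 'placeholder\n' > content.txt        # actual contents unknown
cat > Dockerfile <<'EOF'
FROM gcr.io/k8s-minikube/busybox:latest
RUN true
ADD content.txt /
EOF
out/minikube-linux-arm64 -p functional-173309 image build -t localhost/my-image:functional-173309 .

As the ssh_runner lines show, minikube tars the context, copies it to /var/lib/minikube/build inside the node, and runs buildctl there rather than building on the host.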

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (0.66s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-173309
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.66s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-arm64 -p functional-173309 image load --daemon kicbase/echo-server:functional-173309 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-arm64 -p functional-173309 image load --daemon kicbase/echo-server:functional-173309 --alsologtostderr: (1.02442817s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-173309 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.34s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-173309 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.24s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-173309 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.20s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-173309 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.20s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-173309 image load --daemon kicbase/echo-server:functional-173309 --alsologtostderr
functional_test.go:380: (dbg) Done: out/minikube-linux-arm64 -p functional-173309 image load --daemon kicbase/echo-server:functional-173309 --alsologtostderr: (1.00285902s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-173309 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.30s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-173309
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-173309 image load --daemon kicbase/echo-server:functional-173309 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-173309 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.41s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-arm64 -p functional-173309 image save kicbase/echo-server:functional-173309 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.44s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-arm64 -p functional-173309 image rm kicbase/echo-server:functional-173309 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-173309 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.59s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.63s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-173309 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-173309 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.63s)
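
Editor's note: ImageSaveToFile, ImageRemove, and ImageLoadFromFile together exercise a save/remove/restore round trip. Condensed into a by-hand sequence, with the tar path swapped for an arbitrary writable location (an assumption; the test uses the workspace path above):

out/minikube-linux-arm64 -p functional-173309 image save kicbase/echo-server:functional-173309 /tmp/echo-server.tar
out/minikube-linux-arm64 -p functional-173309 image rm kicbase/echo-server:functional-173309
out/minikube-linux-arm64 -p functional-173309 image ls        # tag gone
out/minikube-linux-arm64 -p functional-173309 image load /tmp/echo-server.tar
out/minikube-linux-arm64 -p functional-173309 image ls        # tag restored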

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-173309
functional_test.go:439: (dbg) Run:  out/minikube-linux-arm64 -p functional-173309 image save --daemon kicbase/echo-server:functional-173309 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect kicbase/echo-server:functional-173309
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.42s)

                                                
                                    
TestFunctional/delete_echo-server_images (0.05s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-173309
--- PASS: TestFunctional/delete_echo-server_images (0.05s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-173309
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-173309
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (184.25s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 -p ha-714619 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd
E1101 08:42:19.197566    4107 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2307/.minikube/profiles/addons-775283/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 08:42:46.903792    4107 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2307/.minikube/profiles/addons-775283/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 -p ha-714619 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd: (3m3.357658525s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-714619 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (184.25s)
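
Editor's note: with the HA cluster up, an independent way to confirm the topology is kubectl against the profile's context (minikube names the kubeconfig context after the profile). A sketch; at this point the cluster has three control-plane nodes, and a worker is added in AddWorkerNode below:

out/minikube-linux-arm64 -p ha-714619 status --alsologtostderr -v 5
kubectl --context ha-714619 get nodes -o wide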

                                                
                                    
TestMultiControlPlane/serial/DeployApp (7.51s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 -p ha-714619 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 -p ha-714619 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 -p ha-714619 kubectl -- rollout status deployment/busybox: (4.585986078s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-714619 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 -p ha-714619 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-714619 kubectl -- exec busybox-7b57f96db7-kjsfx -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-714619 kubectl -- exec busybox-7b57f96db7-sgq9d -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-714619 kubectl -- exec busybox-7b57f96db7-z7n4s -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-714619 kubectl -- exec busybox-7b57f96db7-kjsfx -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-714619 kubectl -- exec busybox-7b57f96db7-sgq9d -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-714619 kubectl -- exec busybox-7b57f96db7-z7n4s -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-714619 kubectl -- exec busybox-7b57f96db7-kjsfx -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-714619 kubectl -- exec busybox-7b57f96db7-sgq9d -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-714619 kubectl -- exec busybox-7b57f96db7-z7n4s -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (7.51s)

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.67s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 -p ha-714619 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-714619 kubectl -- exec busybox-7b57f96db7-kjsfx -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-714619 kubectl -- exec busybox-7b57f96db7-kjsfx -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-714619 kubectl -- exec busybox-7b57f96db7-sgq9d -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-714619 kubectl -- exec busybox-7b57f96db7-sgq9d -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-714619 kubectl -- exec busybox-7b57f96db7-z7n4s -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-714619 kubectl -- exec busybox-7b57f96db7-z7n4s -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.67s)
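
Editor's note: the pipeline this test runs inside each pod, nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3, keeps only the fifth line of busybox nslookup output (the answer record) and extracts its third space-separated field, the host IP. That IP, 192.168.49.1 here, is what the following ping -c 1 verifies. The same check, standalone, against one of this run's pods:

kubectl --context ha-714619 exec busybox-7b57f96db7-kjsfx -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"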

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (61.82s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 -p ha-714619 node add --alsologtostderr -v 5
E1101 08:44:24.518466    4107 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2307/.minikube/profiles/functional-173309/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 08:44:24.524867    4107 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2307/.minikube/profiles/functional-173309/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 08:44:24.536450    4107 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2307/.minikube/profiles/functional-173309/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 08:44:24.557810    4107 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2307/.minikube/profiles/functional-173309/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 08:44:24.599185    4107 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2307/.minikube/profiles/functional-173309/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 08:44:24.680486    4107 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2307/.minikube/profiles/functional-173309/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 08:44:24.842203    4107 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2307/.minikube/profiles/functional-173309/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 08:44:25.163877    4107 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2307/.minikube/profiles/functional-173309/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 08:44:25.805560    4107 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2307/.minikube/profiles/functional-173309/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 08:44:27.087186    4107 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2307/.minikube/profiles/functional-173309/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 08:44:29.648987    4107 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2307/.minikube/profiles/functional-173309/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 -p ha-714619 node add --alsologtostderr -v 5: (1m0.73315841s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-714619 status --alsologtostderr -v 5
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-714619 status --alsologtostderr -v 5: (1.090264675s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (61.82s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.1s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-714619 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.10s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (1.11s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.109042944s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.11s)
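
Editor's note: the HAppy* checks time minikube's "profile list --output json". For ad-hoc inspection the same output filters with jq; a sketch, noting that the valid/invalid top-level keys are assumed from current minikube output rather than shown in this report:

out/minikube-linux-arm64 profile list --output json | jq -r '.valid[].Name'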

                                                
                                    
TestMultiControlPlane/serial/CopyFile (20.51s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-arm64 -p ha-714619 status --output json --alsologtostderr -v 5
E1101 08:44:34.771044    4107 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2307/.minikube/profiles/functional-173309/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:328: (dbg) Done: out/minikube-linux-arm64 -p ha-714619 status --output json --alsologtostderr -v 5: (1.063951809s)
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-714619 cp testdata/cp-test.txt ha-714619:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-714619 ssh -n ha-714619 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-714619 cp ha-714619:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile183633303/001/cp-test_ha-714619.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-714619 ssh -n ha-714619 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-714619 cp ha-714619:/home/docker/cp-test.txt ha-714619-m02:/home/docker/cp-test_ha-714619_ha-714619-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-714619 ssh -n ha-714619 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-714619 ssh -n ha-714619-m02 "sudo cat /home/docker/cp-test_ha-714619_ha-714619-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-714619 cp ha-714619:/home/docker/cp-test.txt ha-714619-m03:/home/docker/cp-test_ha-714619_ha-714619-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-714619 ssh -n ha-714619 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-714619 ssh -n ha-714619-m03 "sudo cat /home/docker/cp-test_ha-714619_ha-714619-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-714619 cp ha-714619:/home/docker/cp-test.txt ha-714619-m04:/home/docker/cp-test_ha-714619_ha-714619-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-714619 ssh -n ha-714619 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-714619 ssh -n ha-714619-m04 "sudo cat /home/docker/cp-test_ha-714619_ha-714619-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-714619 cp testdata/cp-test.txt ha-714619-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-714619 ssh -n ha-714619-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-714619 cp ha-714619-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile183633303/001/cp-test_ha-714619-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-714619 ssh -n ha-714619-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-714619 cp ha-714619-m02:/home/docker/cp-test.txt ha-714619:/home/docker/cp-test_ha-714619-m02_ha-714619.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-714619 ssh -n ha-714619-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-714619 ssh -n ha-714619 "sudo cat /home/docker/cp-test_ha-714619-m02_ha-714619.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-714619 cp ha-714619-m02:/home/docker/cp-test.txt ha-714619-m03:/home/docker/cp-test_ha-714619-m02_ha-714619-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-714619 ssh -n ha-714619-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-714619 ssh -n ha-714619-m03 "sudo cat /home/docker/cp-test_ha-714619-m02_ha-714619-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-714619 cp ha-714619-m02:/home/docker/cp-test.txt ha-714619-m04:/home/docker/cp-test_ha-714619-m02_ha-714619-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-714619 ssh -n ha-714619-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-714619 ssh -n ha-714619-m04 "sudo cat /home/docker/cp-test_ha-714619-m02_ha-714619-m04.txt"
E1101 08:44:45.025908    4107 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2307/.minikube/profiles/functional-173309/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-714619 cp testdata/cp-test.txt ha-714619-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-714619 ssh -n ha-714619-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-714619 cp ha-714619-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile183633303/001/cp-test_ha-714619-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-714619 ssh -n ha-714619-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-714619 cp ha-714619-m03:/home/docker/cp-test.txt ha-714619:/home/docker/cp-test_ha-714619-m03_ha-714619.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-714619 ssh -n ha-714619-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-714619 ssh -n ha-714619 "sudo cat /home/docker/cp-test_ha-714619-m03_ha-714619.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-714619 cp ha-714619-m03:/home/docker/cp-test.txt ha-714619-m02:/home/docker/cp-test_ha-714619-m03_ha-714619-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-714619 ssh -n ha-714619-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-714619 ssh -n ha-714619-m02 "sudo cat /home/docker/cp-test_ha-714619-m03_ha-714619-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-714619 cp ha-714619-m03:/home/docker/cp-test.txt ha-714619-m04:/home/docker/cp-test_ha-714619-m03_ha-714619-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-714619 ssh -n ha-714619-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-714619 ssh -n ha-714619-m04 "sudo cat /home/docker/cp-test_ha-714619-m03_ha-714619-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-714619 cp testdata/cp-test.txt ha-714619-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-714619 ssh -n ha-714619-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-714619 cp ha-714619-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile183633303/001/cp-test_ha-714619-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-714619 ssh -n ha-714619-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-714619 cp ha-714619-m04:/home/docker/cp-test.txt ha-714619:/home/docker/cp-test_ha-714619-m04_ha-714619.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-714619 ssh -n ha-714619-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-714619 ssh -n ha-714619 "sudo cat /home/docker/cp-test_ha-714619-m04_ha-714619.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-714619 cp ha-714619-m04:/home/docker/cp-test.txt ha-714619-m02:/home/docker/cp-test_ha-714619-m04_ha-714619-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-714619 ssh -n ha-714619-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-714619 ssh -n ha-714619-m02 "sudo cat /home/docker/cp-test_ha-714619-m04_ha-714619-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-714619 cp ha-714619-m04:/home/docker/cp-test.txt ha-714619-m03:/home/docker/cp-test_ha-714619-m04_ha-714619-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-714619 ssh -n ha-714619-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-714619 ssh -n ha-714619-m03 "sudo cat /home/docker/cp-test_ha-714619-m04_ha-714619-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (20.51s)
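
Editor's note: the CopyFile matrix above is built from two primitives repeated over every node pair: "minikube cp" to place a file and "minikube ssh -n <node>" to read it back. The minimal round trip looks like:

out/minikube-linux-arm64 -p ha-714619 cp testdata/cp-test.txt ha-714619-m02:/home/docker/cp-test.txt
out/minikube-linux-arm64 -p ha-714619 ssh -n ha-714619-m02 "sudo cat /home/docker/cp-test.txt"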

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (12.96s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p ha-714619 node stop m02 --alsologtostderr -v 5
E1101 08:45:05.507972    4107 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2307/.minikube/profiles/functional-173309/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p ha-714619 node stop m02 --alsologtostderr -v 5: (12.152445175s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-arm64 -p ha-714619 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-714619 status --alsologtostderr -v 5: exit status 7 (803.710875ms)

                                                
                                                
-- stdout --
	ha-714619
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-714619-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-714619-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-714619-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 08:45:07.384470   59390 out.go:360] Setting OutFile to fd 1 ...
	I1101 08:45:07.384684   59390 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 08:45:07.384715   59390 out.go:374] Setting ErrFile to fd 2...
	I1101 08:45:07.384735   59390 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 08:45:07.385015   59390 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21835-2307/.minikube/bin
	I1101 08:45:07.385241   59390 out.go:368] Setting JSON to false
	I1101 08:45:07.385309   59390 mustload.go:66] Loading cluster: ha-714619
	I1101 08:45:07.385388   59390 notify.go:221] Checking for updates...
	I1101 08:45:07.385843   59390 config.go:182] Loaded profile config "ha-714619": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1101 08:45:07.385887   59390 status.go:174] checking status of ha-714619 ...
	I1101 08:45:07.386757   59390 cli_runner.go:164] Run: docker container inspect ha-714619 --format={{.State.Status}}
	I1101 08:45:07.407863   59390 status.go:371] ha-714619 host status = "Running" (err=<nil>)
	I1101 08:45:07.407890   59390 host.go:66] Checking if "ha-714619" exists ...
	I1101 08:45:07.408210   59390 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-714619
	I1101 08:45:07.443589   59390 host.go:66] Checking if "ha-714619" exists ...
	I1101 08:45:07.443877   59390 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 08:45:07.443921   59390 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-714619
	I1101 08:45:07.462843   59390 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21835-2307/.minikube/machines/ha-714619/id_rsa Username:docker}
	I1101 08:45:07.568475   59390 ssh_runner.go:195] Run: systemctl --version
	I1101 08:45:07.576000   59390 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 08:45:07.590858   59390 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 08:45:07.657575   59390 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:62 OomKillDisable:true NGoroutines:72 SystemTime:2025-11-01 08:45:07.648162298 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 08:45:07.658187   59390 kubeconfig.go:125] found "ha-714619" server: "https://192.168.49.254:8443"
	I1101 08:45:07.658233   59390 api_server.go:166] Checking apiserver status ...
	I1101 08:45:07.658276   59390 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 08:45:07.671592   59390 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1427/cgroup
	I1101 08:45:07.680717   59390 api_server.go:182] apiserver freezer: "10:freezer:/docker/c01f68b96e4c780ac30c2700a2694c92bb629ffe7e00167a6baf9758c6403445/kubepods/burstable/pod6ca057e5b8dff0107dfe7914c05e021c/9b1c8df6e58f92c945cd077e62298d8b2694ad127112317743215e527ae03177"
	I1101 08:45:07.680798   59390 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/c01f68b96e4c780ac30c2700a2694c92bb629ffe7e00167a6baf9758c6403445/kubepods/burstable/pod6ca057e5b8dff0107dfe7914c05e021c/9b1c8df6e58f92c945cd077e62298d8b2694ad127112317743215e527ae03177/freezer.state
	I1101 08:45:07.688612   59390 api_server.go:204] freezer state: "THAWED"
	I1101 08:45:07.688638   59390 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1101 08:45:07.697380   59390 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1101 08:45:07.697408   59390 status.go:463] ha-714619 apiserver status = Running (err=<nil>)
	I1101 08:45:07.697419   59390 status.go:176] ha-714619 status: &{Name:ha-714619 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1101 08:45:07.697436   59390 status.go:174] checking status of ha-714619-m02 ...
	I1101 08:45:07.697797   59390 cli_runner.go:164] Run: docker container inspect ha-714619-m02 --format={{.State.Status}}
	I1101 08:45:07.715557   59390 status.go:371] ha-714619-m02 host status = "Stopped" (err=<nil>)
	I1101 08:45:07.715585   59390 status.go:384] host is not running, skipping remaining checks
	I1101 08:45:07.715592   59390 status.go:176] ha-714619-m02 status: &{Name:ha-714619-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1101 08:45:07.715612   59390 status.go:174] checking status of ha-714619-m03 ...
	I1101 08:45:07.715912   59390 cli_runner.go:164] Run: docker container inspect ha-714619-m03 --format={{.State.Status}}
	I1101 08:45:07.732237   59390 status.go:371] ha-714619-m03 host status = "Running" (err=<nil>)
	I1101 08:45:07.732267   59390 host.go:66] Checking if "ha-714619-m03" exists ...
	I1101 08:45:07.732562   59390 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-714619-m03
	I1101 08:45:07.751902   59390 host.go:66] Checking if "ha-714619-m03" exists ...
	I1101 08:45:07.752202   59390 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 08:45:07.752247   59390 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-714619-m03
	I1101 08:45:07.773354   59390 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/21835-2307/.minikube/machines/ha-714619-m03/id_rsa Username:docker}
	I1101 08:45:07.878951   59390 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 08:45:07.893511   59390 kubeconfig.go:125] found "ha-714619" server: "https://192.168.49.254:8443"
	I1101 08:45:07.893537   59390 api_server.go:166] Checking apiserver status ...
	I1101 08:45:07.893582   59390 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 08:45:07.906972   59390 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1377/cgroup
	I1101 08:45:07.919040   59390 api_server.go:182] apiserver freezer: "10:freezer:/docker/44348588a2189927729f0bc7cfd34d5ba784ae0655c4d654aa4d6aad43b86c0b/kubepods/burstable/podf8fe49967fc8a16a304b15e1e53fc0d7/c89c5042b6d9dbc92a3aca25ef5b8cdeafb2eae16db92f6f6c61c7f6af293b81"
	I1101 08:45:07.919113   59390 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/44348588a2189927729f0bc7cfd34d5ba784ae0655c4d654aa4d6aad43b86c0b/kubepods/burstable/podf8fe49967fc8a16a304b15e1e53fc0d7/c89c5042b6d9dbc92a3aca25ef5b8cdeafb2eae16db92f6f6c61c7f6af293b81/freezer.state
	I1101 08:45:07.928243   59390 api_server.go:204] freezer state: "THAWED"
	I1101 08:45:07.928274   59390 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1101 08:45:07.936518   59390 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1101 08:45:07.936544   59390 status.go:463] ha-714619-m03 apiserver status = Running (err=<nil>)
	I1101 08:45:07.936554   59390 status.go:176] ha-714619-m03 status: &{Name:ha-714619-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1101 08:45:07.936570   59390 status.go:174] checking status of ha-714619-m04 ...
	I1101 08:45:07.936861   59390 cli_runner.go:164] Run: docker container inspect ha-714619-m04 --format={{.State.Status}}
	I1101 08:45:07.954494   59390 status.go:371] ha-714619-m04 host status = "Running" (err=<nil>)
	I1101 08:45:07.954523   59390 host.go:66] Checking if "ha-714619-m04" exists ...
	I1101 08:45:07.954823   59390 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-714619-m04
	I1101 08:45:07.972993   59390 host.go:66] Checking if "ha-714619-m04" exists ...
	I1101 08:45:07.973393   59390 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 08:45:07.973440   59390 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-714619-m04
	I1101 08:45:07.993424   59390 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32803 SSHKeyPath:/home/jenkins/minikube-integration/21835-2307/.minikube/machines/ha-714619-m04/id_rsa Username:docker}
	I1101 08:45:08.105922   59390 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 08:45:08.126951   59390 status.go:176] ha-714619-m04 status: &{Name:ha-714619-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.96s)
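
Note: the status probe in the log above follows minikube's apiserver check: find the kube-apiserver process, confirm its freezer cgroup is THAWED, then query /healthz. A rough manual equivalent over `minikube ssh`, with the PID left as a placeholder and the load-balancer endpoint taken from this run (both vary per cluster):

  # locate the apiserver process inside the node
  minikube -p ha-714619 ssh -- 'sudo pgrep -xnf "kube-apiserver.*minikube.*"'
  # read its freezer cgroup (cgroup v1 layout, as logged); substitute the PID printed above
  minikube -p ha-714619 ssh -- 'sudo egrep "^[0-9]+:freezer:" /proc/<pid>/cgroup'
  # hit the shared apiserver endpoint; -k because the CA is cluster-local
  curl -k https://192.168.49.254:8443/healthz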

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.81s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.81s)

TestMultiControlPlane/serial/RestartSecondaryNode (15.08s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p ha-714619 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p ha-714619 node start m02 --alsologtostderr -v 5: (13.502709919s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-714619 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Done: out/minikube-linux-arm64 -p ha-714619 status --alsologtostderr -v 5: (1.423150918s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (15.08s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.38s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.377709563s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.38s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (97.63s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-arm64 -p ha-714619 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-arm64 -p ha-714619 stop --alsologtostderr -v 5
E1101 08:45:46.469306    4107 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2307/.minikube/profiles/functional-173309/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-arm64 -p ha-714619 stop --alsologtostderr -v 5: (37.419906984s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-arm64 -p ha-714619 start --wait true --alsologtostderr -v 5
ha_test.go:469: (dbg) Done: out/minikube-linux-arm64 -p ha-714619 start --wait true --alsologtostderr -v 5: (1m0.04793434s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-arm64 -p ha-714619 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (97.63s)
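
Note: the test above is a stop/start cycle asserting that the node list survives a full cluster restart. A minimal sketch of the same check, reusing this run's profile name:

  before=$(out/minikube-linux-arm64 -p ha-714619 node list)
  out/minikube-linux-arm64 -p ha-714619 stop
  out/minikube-linux-arm64 -p ha-714619 start --wait true
  after=$(out/minikube-linux-arm64 -p ha-714619 node list)
  # the lists should be identical after the restart
  [ "$before" = "$after" ] && echo "node list preserved"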

TestMultiControlPlane/serial/DeleteSecondaryNode (10.88s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-arm64 -p ha-714619 node delete m03 --alsologtostderr -v 5
E1101 08:47:08.391064    4107 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2307/.minikube/profiles/functional-173309/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:489: (dbg) Done: out/minikube-linux-arm64 -p ha-714619 node delete m03 --alsologtostderr -v 5: (9.884089516s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-arm64 -p ha-714619 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (10.88s)
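
Note: the go-template query above prints each remaining node's Ready condition. An equivalent jsonpath form (illustrative, not what the test runs) pairs node names with their status:

  kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'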

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.81s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.81s)

TestMultiControlPlane/serial/StopCluster (36.35s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-arm64 -p ha-714619 stop --alsologtostderr -v 5
E1101 08:47:19.196965    4107 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2307/.minikube/profiles/addons-775283/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-arm64 -p ha-714619 stop --alsologtostderr -v 5: (36.229797316s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-arm64 -p ha-714619 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-714619 status --alsologtostderr -v 5: exit status 7 (116.279564ms)

-- stdout --
	ha-714619
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-714619-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-714619-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1101 08:47:50.999862   74455 out.go:360] Setting OutFile to fd 1 ...
	I1101 08:47:51.000070   74455 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 08:47:51.000103   74455 out.go:374] Setting ErrFile to fd 2...
	I1101 08:47:51.000123   74455 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 08:47:51.000420   74455 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21835-2307/.minikube/bin
	I1101 08:47:51.000699   74455 out.go:368] Setting JSON to false
	I1101 08:47:51.000768   74455 mustload.go:66] Loading cluster: ha-714619
	I1101 08:47:51.000863   74455 notify.go:221] Checking for updates...
	I1101 08:47:51.001253   74455 config.go:182] Loaded profile config "ha-714619": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1101 08:47:51.001296   74455 status.go:174] checking status of ha-714619 ...
	I1101 08:47:51.002209   74455 cli_runner.go:164] Run: docker container inspect ha-714619 --format={{.State.Status}}
	I1101 08:47:51.024498   74455 status.go:371] ha-714619 host status = "Stopped" (err=<nil>)
	I1101 08:47:51.024523   74455 status.go:384] host is not running, skipping remaining checks
	I1101 08:47:51.024530   74455 status.go:176] ha-714619 status: &{Name:ha-714619 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1101 08:47:51.024568   74455 status.go:174] checking status of ha-714619-m02 ...
	I1101 08:47:51.024894   74455 cli_runner.go:164] Run: docker container inspect ha-714619-m02 --format={{.State.Status}}
	I1101 08:47:51.043868   74455 status.go:371] ha-714619-m02 host status = "Stopped" (err=<nil>)
	I1101 08:47:51.043907   74455 status.go:384] host is not running, skipping remaining checks
	I1101 08:47:51.043913   74455 status.go:176] ha-714619-m02 status: &{Name:ha-714619-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1101 08:47:51.043932   74455 status.go:174] checking status of ha-714619-m04 ...
	I1101 08:47:51.044250   74455 cli_runner.go:164] Run: docker container inspect ha-714619-m04 --format={{.State.Status}}
	I1101 08:47:51.063745   74455 status.go:371] ha-714619-m04 host status = "Stopped" (err=<nil>)
	I1101 08:47:51.063771   74455 status.go:384] host is not running, skipping remaining checks
	I1101 08:47:51.063778   74455 status.go:176] ha-714619-m04 status: &{Name:ha-714619-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (36.35s)
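
Note: with every node stopped, `status` exits non-zero (7 in this run) rather than reporting success, so scripts should capture the exit code instead of parsing the text. A minimal sketch:

  out/minikube-linux-arm64 -p ha-714619 status >/dev/null 2>&1
  rc=$?
  # in this run rc=7 accompanied all-stopped hosts; 0 means fully running
  [ "$rc" -ne 0 ] && echo "cluster not fully running (exit $rc)"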

TestMultiControlPlane/serial/RestartCluster (60.4s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-arm64 -p ha-714619 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd
ha_test.go:562: (dbg) Done: out/minikube-linux-arm64 -p ha-714619 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd: (59.430650298s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-arm64 -p ha-714619 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (60.40s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.8s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.80s)

TestMultiControlPlane/serial/AddSecondaryNode (51.56s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-arm64 -p ha-714619 node add --control-plane --alsologtostderr -v 5
E1101 08:49:24.521931    4107 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2307/.minikube/profiles/functional-173309/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-arm64 -p ha-714619 node add --control-plane --alsologtostderr -v 5: (50.423338963s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-arm64 -p ha-714619 status --alsologtostderr -v 5
ha_test.go:613: (dbg) Done: out/minikube-linux-arm64 -p ha-714619 status --alsologtostderr -v 5: (1.134450708s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (51.56s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.12s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.122062879s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.12s)

TestJSONOutput/start/Command (83.92s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-948144 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=containerd
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-948144 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=containerd: (1m23.913851764s)
--- PASS: TestJSONOutput/start/Command (83.92s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.75s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-948144 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.75s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.62s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-948144 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.62s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.97s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-948144 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-948144 --output=json --user=testUser: (5.971321248s)
--- PASS: TestJSONOutput/stop/Command (5.97s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.24s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-922522 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-922522 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (93.979745ms)

-- stdout --
	{"specversion":"1.0","id":"9f493cea-b8ce-4c53-8b07-9d5e33c07545","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-922522] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"f64e59e7-d3a4-44ab-ba4a-7c65b6ee3339","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21835"}}
	{"specversion":"1.0","id":"5c80280c-9230-430a-97f4-a8dfac49c421","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"b9f7d44f-3caa-4a6b-aa47-c00df1246588","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21835-2307/kubeconfig"}}
	{"specversion":"1.0","id":"5897658c-2f2c-43e4-8e2c-8f62ef64e10e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21835-2307/.minikube"}}
	{"specversion":"1.0","id":"3c72f9ce-4c6e-48db-8c9e-c80ebee4a529","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"cf6e9187-d56b-401e-be52-6ea68beb666b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"bf9891c7-4e41-4559-88d8-bd1fc0910661","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-922522" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-922522
--- PASS: TestErrorJSONOutput (0.24s)
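
Note: the --output=json events above use a CloudEvents envelope, one JSON object per line, so error events can be filtered by their type field. A sketch with jq (profile name illustrative; jq assumed available):

  out/minikube-linux-arm64 start -p demo --output=json --driver=fail \
    | jq -r 'select(.type=="io.k8s.sigs.minikube.error") | .data.message'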

TestKicCustomNetwork/create_custom_network (38.27s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-069165 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-069165 --network=: (36.109065041s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-069165" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-069165
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-069165: (2.139904305s)
--- PASS: TestKicCustomNetwork/create_custom_network (38.27s)
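
Note: for the docker driver, --network selects the container network by name, creating it when absent; the test then confirms it via `docker network ls`. A sketch with an illustrative network name:

  minikube start -p net-demo --network=demo-net --driver=docker
  # exact-match the network name in the listing
  docker network ls --format '{{.Name}}' | grep -x demo-net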

TestKicCustomNetwork/use_default_bridge_network (38.7s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-516411 --network=bridge
E1101 08:52:19.197001    4107 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2307/.minikube/profiles/addons-775283/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-516411 --network=bridge: (36.627153814s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-516411" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-516411
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-516411: (2.045452241s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (38.70s)

TestKicExistingNetwork (38.3s)

=== RUN   TestKicExistingNetwork
I1101 08:52:48.103279    4107 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1101 08:52:48.119666    4107 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1101 08:52:48.119745    4107 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1101 08:52:48.119763    4107 cli_runner.go:164] Run: docker network inspect existing-network
W1101 08:52:48.135633    4107 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1101 08:52:48.135664    4107 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

stderr:
Error response from daemon: network existing-network not found
I1101 08:52:48.135680    4107 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

** /stderr **
I1101 08:52:48.135779    4107 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1101 08:52:48.152294    4107 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-519f9941df81 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:da:56:5d:1d:ec:84} reservation:<nil>}
I1101 08:52:48.153418    4107 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40018833e0}
I1101 08:52:48.153450    4107 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1101 08:52:48.153504    4107 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1101 08:52:48.214670    4107 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-694041 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-694041 --network=existing-network: (36.116463852s)
helpers_test.go:175: Cleaning up "existing-network-694041" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-694041
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-694041: (2.041142677s)
I1101 08:53:26.391071    4107 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (38.30s)
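
Note: the log shows minikube first inspecting for the named network, then creating it on a free private subnet when the inspect fails. Pre-creating the network by hand works too; a sketch loosely mirroring the logged `docker network create` (minikube's labels and masquerade/icc options omitted):

  docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 \
    -o com.docker.network.driver.mtu=1500 existing-network
  minikube start -p net-demo --network=existing-network --driver=docker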

TestKicCustomSubnet (37.21s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-142625 --subnet=192.168.60.0/24
E1101 08:53:42.265842    4107 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2307/.minikube/profiles/addons-775283/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-142625 --subnet=192.168.60.0/24: (34.947602555s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-142625 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-142625" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-142625
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-142625: (2.234378655s)
--- PASS: TestKicCustomSubnet (37.21s)
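
Note: the docker network is named after the profile, so the requested subnet can be verified directly with `docker network inspect`, exactly as the test does (profile and subnet illustrative):

  minikube start -p subnet-demo --subnet=192.168.60.0/24 --driver=docker
  # expect 192.168.60.0/24
  docker network inspect subnet-demo --format '{{(index .IPAM.Config 0).Subnet}}'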

TestKicStaticIP (36.88s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-457220 --static-ip=192.168.200.200
E1101 08:54:24.521826    4107 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2307/.minikube/profiles/functional-173309/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-457220 --static-ip=192.168.200.200: (34.489853715s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-457220 ip
helpers_test.go:175: Cleaning up "static-ip-457220" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-457220
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-457220: (2.23146617s)
--- PASS: TestKicStaticIP (36.88s)
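
Note: --static-ip pins the node's address, and `minikube ip` should echo it back (IP reused from this run, profile name illustrative):

  minikube start -p static-demo --static-ip=192.168.200.200 --driver=docker
  minikube -p static-demo ip    # expect 192.168.200.200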

TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (71.55s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-676394 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-676394 --driver=docker  --container-runtime=containerd: (32.866346537s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-679192 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-679192 --driver=docker  --container-runtime=containerd: (33.108593364s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-676394
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-679192
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-679192" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-679192
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-679192: (2.14139435s)
helpers_test.go:175: Cleaning up "first-676394" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-676394
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-676394: (2.024195715s)
--- PASS: TestMinikubeProfile (71.55s)
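
Note: `minikube profile <name>` switches the active profile, and `profile list -ojson` reports all of them. A sketch (profile name illustrative; the valid/invalid split in the JSON output is an assumption about its structure):

  minikube profile first-demo
  minikube profile list -ojson | jq -r '.valid[].Name'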

TestMountStart/serial/StartWithMountFirst (8.87s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-573946 --memory=3072 --mount-string /tmp/TestMountStartserial1200575027/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-573946 --memory=3072 --mount-string /tmp/TestMountStartserial1200575027/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (7.86721898s)
--- PASS: TestMountStart/serial/StartWithMountFirst (8.87s)
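
Note: the flags above attach a host directory to the node at start-up, and the follow-up tests simply `ls` it over ssh. A sketch mirroring the logged flags (host path illustrative):

  minikube start -p mount-demo --mount-string "$PWD/shared:/minikube-host" \
    --mount-port 46464 --no-kubernetes --driver=docker --container-runtime=containerd
  minikube -p mount-demo ssh -- ls /minikube-host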

TestMountStart/serial/VerifyMountFirst (0.26s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-573946 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.26s)

TestMountStart/serial/StartWithMountSecond (9.21s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-576079 --memory=3072 --mount-string /tmp/TestMountStartserial1200575027/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-576079 --memory=3072 --mount-string /tmp/TestMountStartserial1200575027/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (8.211766726s)
--- PASS: TestMountStart/serial/StartWithMountSecond (9.21s)

TestMountStart/serial/VerifyMountSecond (0.27s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-576079 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.27s)

TestMountStart/serial/DeleteFirst (1.73s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-573946 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-573946 --alsologtostderr -v=5: (1.733279175s)
--- PASS: TestMountStart/serial/DeleteFirst (1.73s)

TestMountStart/serial/VerifyMountPostDelete (0.27s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-576079 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.27s)

TestMountStart/serial/Stop (1.29s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-576079
mount_start_test.go:196: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-576079: (1.290804847s)
--- PASS: TestMountStart/serial/Stop (1.29s)

TestMountStart/serial/RestartStopped (8.04s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-576079
mount_start_test.go:207: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-576079: (7.039188472s)
--- PASS: TestMountStart/serial/RestartStopped (8.04s)

TestMountStart/serial/VerifyMountPostStop (0.27s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-576079 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.27s)

TestMultiNode/serial/FreshStart2Nodes (134.39s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-819483 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd
E1101 08:57:19.197457    4107 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2307/.minikube/profiles/addons-775283/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-819483 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd: (2m13.85186687s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-819483 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (134.39s)

TestMultiNode/serial/DeployApp2Nodes (5.67s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-819483 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-819483 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-819483 -- rollout status deployment/busybox: (3.747946841s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-819483 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-819483 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-819483 -- exec busybox-7b57f96db7-rnqt2 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-819483 -- exec busybox-7b57f96db7-svznf -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-819483 -- exec busybox-7b57f96db7-rnqt2 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-819483 -- exec busybox-7b57f96db7-svznf -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-819483 -- exec busybox-7b57f96db7-rnqt2 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-819483 -- exec busybox-7b57f96db7-svznf -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.67s)
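
Note: the DNS checks above run nslookup from each busybox replica so that both nodes exercise cluster DNS. A condensed form of the same loop (context name reused from this run; pod names come from the rollout, hence the query):

  pods=$(kubectl --context multinode-819483 get pods -o jsonpath='{.items[*].metadata.name}')
  for p in $pods; do
    kubectl --context multinode-819483 exec "$p" -- nslookup kubernetes.default.svc.cluster.local
  done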

TestMultiNode/serial/PingHostFrom2Pods (1.08s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-819483 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-819483 -- exec busybox-7b57f96db7-rnqt2 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-819483 -- exec busybox-7b57f96db7-rnqt2 -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-819483 -- exec busybox-7b57f96db7-svznf -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-819483 -- exec busybox-7b57f96db7-svznf -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.08s)

TestMultiNode/serial/AddNode (29.42s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-819483 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-819483 -v=5 --alsologtostderr: (28.71615743s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-819483 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (29.42s)

TestMultiNode/serial/MultiNodeLabels (0.1s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-819483 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.10s)

TestMultiNode/serial/ProfileList (0.75s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.75s)

TestMultiNode/serial/CopyFile (10.25s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-819483 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-819483 cp testdata/cp-test.txt multinode-819483:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-819483 ssh -n multinode-819483 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-819483 cp multinode-819483:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3417082155/001/cp-test_multinode-819483.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-819483 ssh -n multinode-819483 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-819483 cp multinode-819483:/home/docker/cp-test.txt multinode-819483-m02:/home/docker/cp-test_multinode-819483_multinode-819483-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-819483 ssh -n multinode-819483 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-819483 ssh -n multinode-819483-m02 "sudo cat /home/docker/cp-test_multinode-819483_multinode-819483-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-819483 cp multinode-819483:/home/docker/cp-test.txt multinode-819483-m03:/home/docker/cp-test_multinode-819483_multinode-819483-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-819483 ssh -n multinode-819483 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-819483 ssh -n multinode-819483-m03 "sudo cat /home/docker/cp-test_multinode-819483_multinode-819483-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-819483 cp testdata/cp-test.txt multinode-819483-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-819483 ssh -n multinode-819483-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-819483 cp multinode-819483-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3417082155/001/cp-test_multinode-819483-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-819483 ssh -n multinode-819483-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-819483 cp multinode-819483-m02:/home/docker/cp-test.txt multinode-819483:/home/docker/cp-test_multinode-819483-m02_multinode-819483.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-819483 ssh -n multinode-819483-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-819483 ssh -n multinode-819483 "sudo cat /home/docker/cp-test_multinode-819483-m02_multinode-819483.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-819483 cp multinode-819483-m02:/home/docker/cp-test.txt multinode-819483-m03:/home/docker/cp-test_multinode-819483-m02_multinode-819483-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-819483 ssh -n multinode-819483-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-819483 ssh -n multinode-819483-m03 "sudo cat /home/docker/cp-test_multinode-819483-m02_multinode-819483-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-819483 cp testdata/cp-test.txt multinode-819483-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-819483 ssh -n multinode-819483-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-819483 cp multinode-819483-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3417082155/001/cp-test_multinode-819483-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-819483 ssh -n multinode-819483-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-819483 cp multinode-819483-m03:/home/docker/cp-test.txt multinode-819483:/home/docker/cp-test_multinode-819483-m03_multinode-819483.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-819483 ssh -n multinode-819483-m03 "sudo cat /home/docker/cp-test.txt"
E1101 08:59:24.519213    4107 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2307/.minikube/profiles/functional-173309/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-819483 ssh -n multinode-819483 "sudo cat /home/docker/cp-test_multinode-819483-m03_multinode-819483.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-819483 cp multinode-819483-m03:/home/docker/cp-test.txt multinode-819483-m02:/home/docker/cp-test_multinode-819483-m03_multinode-819483-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-819483 ssh -n multinode-819483-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-819483 ssh -n multinode-819483-m02 "sudo cat /home/docker/cp-test_multinode-819483-m03_multinode-819483-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.25s)
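
Note: `minikube cp` accepts <node>:<path> on either side, so files can hop host-to-node and node-to-node; each copy above is verified with `ssh -n <node> sudo cat`. A minimal pair, reusing this run's names:

  out/minikube-linux-arm64 -p multinode-819483 cp testdata/cp-test.txt multinode-819483-m02:/home/docker/cp-test.txt
  out/minikube-linux-arm64 -p multinode-819483 ssh -n multinode-819483-m02 "sudo cat /home/docker/cp-test.txt"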

TestMultiNode/serial/StopNode (2.4s)
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-819483 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-819483 node stop m03: (1.318851883s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-819483 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-819483 status: exit status 7 (528.815124ms)

                                                
                                                
-- stdout --
	multinode-819483
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-819483-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-819483-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-819483 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-819483 status --alsologtostderr: exit status 7 (551.356613ms)

                                                
                                                
-- stdout --
	multinode-819483
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-819483-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-819483-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 08:59:27.750553  127961 out.go:360] Setting OutFile to fd 1 ...
	I1101 08:59:27.750683  127961 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 08:59:27.750694  127961 out.go:374] Setting ErrFile to fd 2...
	I1101 08:59:27.750700  127961 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 08:59:27.750941  127961 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21835-2307/.minikube/bin
	I1101 08:59:27.751129  127961 out.go:368] Setting JSON to false
	I1101 08:59:27.751166  127961 mustload.go:66] Loading cluster: multinode-819483
	I1101 08:59:27.751261  127961 notify.go:221] Checking for updates...
	I1101 08:59:27.751549  127961 config.go:182] Loaded profile config "multinode-819483": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1101 08:59:27.751566  127961 status.go:174] checking status of multinode-819483 ...
	I1101 08:59:27.752401  127961 cli_runner.go:164] Run: docker container inspect multinode-819483 --format={{.State.Status}}
	I1101 08:59:27.772382  127961 status.go:371] multinode-819483 host status = "Running" (err=<nil>)
	I1101 08:59:27.772412  127961 host.go:66] Checking if "multinode-819483" exists ...
	I1101 08:59:27.772755  127961 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-819483
	I1101 08:59:27.796000  127961 host.go:66] Checking if "multinode-819483" exists ...
	I1101 08:59:27.796308  127961 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 08:59:27.796361  127961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-819483
	I1101 08:59:27.816235  127961 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32909 SSHKeyPath:/home/jenkins/minikube-integration/21835-2307/.minikube/machines/multinode-819483/id_rsa Username:docker}
	I1101 08:59:27.927411  127961 ssh_runner.go:195] Run: systemctl --version
	I1101 08:59:27.933858  127961 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 08:59:27.946450  127961 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 08:59:28.005157  127961 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-01 08:59:27.995165699 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 08:59:28.005800  127961 kubeconfig.go:125] found "multinode-819483" server: "https://192.168.67.2:8443"
	I1101 08:59:28.005845  127961 api_server.go:166] Checking apiserver status ...
	I1101 08:59:28.005890  127961 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 08:59:28.019080  127961 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1426/cgroup
	I1101 08:59:28.027754  127961 api_server.go:182] apiserver freezer: "10:freezer:/docker/d60263befc7c47ff8d617d267cfc5ee4401c9a110802ce3f8f84e3bbe9c70aa3/kubepods/burstable/pod4bb6a16a9623c1296efc0441647915c6/2e223f3d3986471b199d5d159c4c476a4592832b47b91a61ac6cc59b7a8f7f81"
	I1101 08:59:28.027836  127961 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/d60263befc7c47ff8d617d267cfc5ee4401c9a110802ce3f8f84e3bbe9c70aa3/kubepods/burstable/pod4bb6a16a9623c1296efc0441647915c6/2e223f3d3986471b199d5d159c4c476a4592832b47b91a61ac6cc59b7a8f7f81/freezer.state
	I1101 08:59:28.036149  127961 api_server.go:204] freezer state: "THAWED"
	I1101 08:59:28.036184  127961 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1101 08:59:28.044513  127961 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1101 08:59:28.044539  127961 status.go:463] multinode-819483 apiserver status = Running (err=<nil>)
	I1101 08:59:28.044563  127961 status.go:176] multinode-819483 status: &{Name:multinode-819483 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1101 08:59:28.044586  127961 status.go:174] checking status of multinode-819483-m02 ...
	I1101 08:59:28.044942  127961 cli_runner.go:164] Run: docker container inspect multinode-819483-m02 --format={{.State.Status}}
	I1101 08:59:28.062031  127961 status.go:371] multinode-819483-m02 host status = "Running" (err=<nil>)
	I1101 08:59:28.062074  127961 host.go:66] Checking if "multinode-819483-m02" exists ...
	I1101 08:59:28.062385  127961 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-819483-m02
	I1101 08:59:28.079441  127961 host.go:66] Checking if "multinode-819483-m02" exists ...
	I1101 08:59:28.079760  127961 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 08:59:28.079809  127961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-819483-m02
	I1101 08:59:28.103741  127961 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32914 SSHKeyPath:/home/jenkins/minikube-integration/21835-2307/.minikube/machines/multinode-819483-m02/id_rsa Username:docker}
	I1101 08:59:28.206869  127961 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 08:59:28.219883  127961 status.go:176] multinode-819483-m02 status: &{Name:multinode-819483-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1101 08:59:28.219917  127961 status.go:174] checking status of multinode-819483-m03 ...
	I1101 08:59:28.220224  127961 cli_runner.go:164] Run: docker container inspect multinode-819483-m03 --format={{.State.Status}}
	I1101 08:59:28.237384  127961 status.go:371] multinode-819483-m03 host status = "Stopped" (err=<nil>)
	I1101 08:59:28.237415  127961 status.go:384] host is not running, skipping remaining checks
	I1101 08:59:28.237423  127961 status.go:176] multinode-819483-m03 status: &{Name:multinode-819483-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.40s)
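The non-zero exits above are the expected outcome, not failures: `minikube status` encodes cluster state in its exit code, and this log shows exit status 7 whenever at least one node's host is Stopped. A sketch of how a caller can branch on that, assuming exit 7 keeps that meaning:

    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("out/minikube-linux-arm64", "-p", "multinode-819483", "status")
        out, err := cmd.CombinedOutput()
        fmt.Print(string(out))

        var exitErr *exec.ExitError
        switch {
        case err == nil:
            fmt.Println("all nodes running")
        case errors.As(err, &exitErr) && exitErr.ExitCode() == 7:
            // Matches the runs above: the profile exists but at least
            // one node is stopped.
            fmt.Println("one or more nodes stopped")
        default:
            fmt.Println("status failed:", err)
        }
    }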

TestMultiNode/serial/StartAfterStop (7.93s)
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-819483 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-819483 node start m03 -v=5 --alsologtostderr: (7.113796166s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-819483 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (7.93s)

TestMultiNode/serial/RestartKeepsNodes (74.56s)
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-819483
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-819483
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-819483: (25.714334951s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-819483 --wait=true -v=5 --alsologtostderr
E1101 09:00:47.593878    4107 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2307/.minikube/profiles/functional-173309/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-819483 --wait=true -v=5 --alsologtostderr: (48.723778772s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-819483
--- PASS: TestMultiNode/serial/RestartKeepsNodes (74.56s)
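The invariant this subtest checks is simple: the `node list` output recorded before the stop must match the output after `start --wait=true`, i.e. a full restart may not drop or rename nodes. A rough sketch of that comparison outside the test harness:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func nodeList(profile string) string {
        out, _ := exec.Command("out/minikube-linux-arm64",
            "node", "list", "-p", profile).Output()
        return string(out)
    }

    func main() {
        before := nodeList("multinode-819483")
        // ... stop the cluster and start it again with --wait=true ...
        after := nodeList("multinode-819483")
        if before != after {
            fmt.Println("restart dropped or renamed nodes")
        } else {
            fmt.Println("restart kept all nodes")
        }
    }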

TestMultiNode/serial/DeleteNode (5.7s)
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-819483 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-819483 node delete m03: (4.996205958s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-819483 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.70s)
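The go-template in the last kubectl call above iterates over every node's conditions and prints the status of each Ready condition, one per line, so the test can assert that all surviving nodes report True. The same template run through Go's text/template against a hand-built stand-in for `kubectl get nodes -o json` (lowercase keys, because the template addresses raw JSON field names):

    package main

    import (
        "os"
        "text/template"
    )

    func main() {
        const tmpl = `{{range .items}}{{range .status.conditions}}` +
            `{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`

        // Stand-in for the JSON kubectl would return for two nodes.
        nodes := map[string]any{
            "items": []any{
                map[string]any{"status": map[string]any{"conditions": []any{
                    map[string]any{"type": "Ready", "status": "True"},
                }}},
                map[string]any{"status": map[string]any{"conditions": []any{
                    map[string]any{"type": "Ready", "status": "True"},
                }}},
            },
        }

        t := template.Must(template.New("ready").Parse(tmpl))
        _ = t.Execute(os.Stdout, nodes) // prints " True" twice, one per line
    }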

TestMultiNode/serial/StopMultiNode (24.1s)
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-819483 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-819483 stop: (23.919727409s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-819483 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-819483 status: exit status 7 (88.506578ms)

                                                
                                                
-- stdout --
	multinode-819483
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-819483-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-819483 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-819483 status --alsologtostderr: exit status 7 (94.663012ms)

                                                
                                                
-- stdout --
	multinode-819483
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-819483-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 09:01:20.494013  136785 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:01:20.494200  136785 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:01:20.494226  136785 out.go:374] Setting ErrFile to fd 2...
	I1101 09:01:20.494245  136785 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:01:20.494553  136785 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21835-2307/.minikube/bin
	I1101 09:01:20.494803  136785 out.go:368] Setting JSON to false
	I1101 09:01:20.494867  136785 mustload.go:66] Loading cluster: multinode-819483
	I1101 09:01:20.494947  136785 notify.go:221] Checking for updates...
	I1101 09:01:20.495330  136785 config.go:182] Loaded profile config "multinode-819483": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1101 09:01:20.495366  136785 status.go:174] checking status of multinode-819483 ...
	I1101 09:01:20.496211  136785 cli_runner.go:164] Run: docker container inspect multinode-819483 --format={{.State.Status}}
	I1101 09:01:20.515897  136785 status.go:371] multinode-819483 host status = "Stopped" (err=<nil>)
	I1101 09:01:20.515922  136785 status.go:384] host is not running, skipping remaining checks
	I1101 09:01:20.515929  136785 status.go:176] multinode-819483 status: &{Name:multinode-819483 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1101 09:01:20.515953  136785 status.go:174] checking status of multinode-819483-m02 ...
	I1101 09:01:20.516275  136785 cli_runner.go:164] Run: docker container inspect multinode-819483-m02 --format={{.State.Status}}
	I1101 09:01:20.539650  136785 status.go:371] multinode-819483-m02 host status = "Stopped" (err=<nil>)
	I1101 09:01:20.539677  136785 status.go:384] host is not running, skipping remaining checks
	I1101 09:01:20.539684  136785 status.go:176] multinode-819483-m02 status: &{Name:multinode-819483-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.10s)

TestMultiNode/serial/RestartMultiNode (54.56s)
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-819483 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-819483 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd: (53.822164858s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-819483 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (54.56s)

TestMultiNode/serial/ValidateNameConflict (37.59s)
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-819483
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-819483-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-819483-m02 --driver=docker  --container-runtime=containerd: exit status 14 (106.314061ms)

                                                
                                                
-- stdout --
	* [multinode-819483-m02] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21835
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21835-2307/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21835-2307/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-819483-m02' is duplicated with machine name 'multinode-819483-m02' in profile 'multinode-819483'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-819483-m03 --driver=docker  --container-runtime=containerd
E1101 09:02:19.197853    4107 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2307/.minikube/profiles/addons-775283/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-819483-m03 --driver=docker  --container-runtime=containerd: (34.975898342s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-819483
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-819483: exit status 80 (365.724189ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-819483 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-819483-m03 already exists in multinode-819483-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-819483-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-819483-m03: (2.093769539s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (37.59s)
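Both rejections above come from the same rule: a new profile name may not collide with any machine name owned by an existing profile (the primary machine shares the profile name; extra nodes get -m02, -m03, ...). An illustrative check, not minikube's actual validation code:

    package main

    import (
        "fmt"
        "strings"
    )

    // machineNames lists the machine names a profile owns: the profile
    // itself plus -m02, -m03, ... (illustrative naming convention).
    func machineNames(profile string, nodes int) []string {
        names := []string{profile}
        for i := 2; i <= nodes; i++ {
            names = append(names, fmt.Sprintf("%s-m%02d", profile, i))
        }
        return names
    }

    func validateProfileName(candidate string, existing map[string]int) error {
        for profile, nodes := range existing {
            for _, m := range machineNames(profile, nodes) {
                if strings.EqualFold(m, candidate) {
                    return fmt.Errorf(
                        "profile name %q is duplicated with machine name %q in profile %q",
                        candidate, m, profile)
                }
            }
        }
        return nil
    }

    func main() {
        existing := map[string]int{"multinode-819483": 2} // m02 exists at this point
        fmt.Println(validateProfileName("multinode-819483-m02", existing))
    }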

TestPreload (121.92s)
=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-208683 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.0
preload_test.go:43: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-208683 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.0: (1m0.189746687s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-208683 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-arm64 -p test-preload-208683 image pull gcr.io/k8s-minikube/busybox: (2.571420811s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-208683
preload_test.go:57: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-208683: (6.228170481s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-208683 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd
E1101 09:04:24.518352    4107 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2307/.minikube/profiles/functional-173309/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:65: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-208683 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd: (50.25956698s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-208683 image list
helpers_test.go:175: Cleaning up "test-preload-208683" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-208683
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-208683: (2.442719541s)
--- PASS: TestPreload (121.92s)
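The sequence above is the point of the test: start with --preload=false on an older Kubernetes, pull an extra image, stop, restart with preloads available, then list images, presumably to confirm the restart did not wipe the manually pulled busybox from the runtime's image store. A minimal standalone version of that final check:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command("out/minikube-linux-arm64",
            "-p", "test-preload-208683", "image", "list").CombinedOutput()
        if err != nil {
            fmt.Println("image list failed:", err)
            return
        }
        if strings.Contains(string(out), "gcr.io/k8s-minikube/busybox") {
            fmt.Println("busybox survived the restart")
        } else {
            fmt.Println("busybox missing: the restart overwrote the image store")
        }
    }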

TestInsufficientStorage (12.63s)
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-890552 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=containerd
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-890552 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (10.090750993s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"0c59e1bc-eb5d-4b09-8b16-abf9d7a1a6e4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-890552] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"850e2255-771a-4fb0-95cf-c3543082823e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21835"}}
	{"specversion":"1.0","id":"62d34946-9e73-4eca-9fd8-8504bf966acb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"1d223893-fdf5-4d49-82f8-ed1c85a78016","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21835-2307/kubeconfig"}}
	{"specversion":"1.0","id":"953c9f83-54b1-4387-9c3a-a8535db8b1c3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21835-2307/.minikube"}}
	{"specversion":"1.0","id":"4a061042-bb59-4d10-9006-9e7c41a692ac","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"64e67359-6113-4995-a846-9bf7d78e11ce","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"609f7dd2-14d7-4079-9169-98ed4f01cdd0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"933118ca-a0d9-425e-8e48-ebc36a9e3168","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"9821962e-ee47-4dc3-b6f3-92ad72da4bf2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"4d4d294c-167a-4a63-9e9d-425d6ee39533","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"48c31f45-27d3-424c-a54e-6ea3ebefd716","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-890552\" primary control-plane node in \"insufficient-storage-890552\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"c3203fa9-cdff-4790-bacc-9511a04b76d2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1760939008-21773 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"22272cd0-3e61-443e-aeb8-641e53282d35","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"f5fa7647-a0d0-4842-9f5a-550408f05f06","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-890552 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-890552 --output=json --layout=cluster: exit status 7 (291.397698ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-890552","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-890552","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1101 09:05:50.766347  155121 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-890552" does not appear in /home/jenkins/minikube-integration/21835-2307/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-890552 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-890552 --output=json --layout=cluster: exit status 7 (305.678355ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-890552","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-890552","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1101 09:05:51.073391  155187 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-890552" does not appear in /home/jenkins/minikube-integration/21835-2307/kubeconfig
	E1101 09:05:51.083740  155187 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/21835-2307/.minikube/profiles/insufficient-storage-890552/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-890552" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-890552
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-890552: (1.943191878s)
--- PASS: TestInsufficientStorage (12.63s)
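The --output=json stream above is line-delimited CloudEvents (note the specversion fields); the test pins the storage probe with the MINIKUBE_TEST_STORAGE_CAPACITY and MINIKUBE_TEST_AVAILABLE_STORAGE overrides and expects exit code 26 plus an error event named RSRC_DOCKER_STORAGE. A sketch of scanning such a stream for the error event, run here over a canned sample line:

    package main

    import (
        "bufio"
        "encoding/json"
        "fmt"
        "strings"
    )

    // event models just the CloudEvents fields this check needs.
    type event struct {
        Type string            `json:"type"`
        Data map[string]string `json:"data"`
    }

    func main() {
        // Canned stand-in for the line-delimited JSON minikube prints
        // with --output=json; the real data field carries more keys.
        stream := `{"type":"io.k8s.sigs.minikube.error",` +
            `"data":{"name":"RSRC_DOCKER_STORAGE","exitcode":"26"}}`

        sc := bufio.NewScanner(strings.NewReader(stream))
        for sc.Scan() {
            var e event
            if err := json.Unmarshal(sc.Bytes(), &e); err != nil {
                continue // skip any non-JSON noise
            }
            if e.Type == "io.k8s.sigs.minikube.error" {
                fmt.Printf("error event %s (exit code %s)\n",
                    e.Data["name"], e.Data["exitcode"])
            }
        }
    }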

TestRunningBinaryUpgrade (72.95s)
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.3473882369 start -p running-upgrade-208182 --memory=3072 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.3473882369 start -p running-upgrade-208182 --memory=3072 --vm-driver=docker  --container-runtime=containerd: (38.785462677s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-208182 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-208182 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (30.707838753s)
helpers_test.go:175: Cleaning up "running-upgrade-208182" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-208182
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-208182: (2.026287605s)
--- PASS: TestRunningBinaryUpgrade (72.95s)

TestKubernetesUpgrade (102.75s)
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-881084 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E1101 09:07:19.198233    4107 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2307/.minikube/profiles/addons-775283/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-881084 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (37.664876723s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-881084
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-881084: (1.395175459s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-881084 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-881084 status --format={{.Host}}: exit status 7 (168.447008ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-881084 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-881084 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (42.939523646s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-881084 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-881084 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-881084 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd: exit status 106 (116.708295ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-881084] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21835
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21835-2307/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21835-2307/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-881084
	    minikube start -p kubernetes-upgrade-881084 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-8810842 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.1, by running:
	    
	    minikube start -p kubernetes-upgrade-881084 --kubernetes-version=v1.34.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-881084 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-881084 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (17.773880129s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-881084" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-881084
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-881084: (2.513502074s)
--- PASS: TestKubernetesUpgrade (102.75s)
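The exit-106 leg above is the interesting one: upgrades in place are allowed, but moving an existing cluster to an older Kubernetes is refused with K8S_DOWNGRADE_UNSUPPORTED and the delete/recreate suggestions shown. An illustrative version guard, not minikube's actual logic:

    package main

    import (
        "fmt"
        "strconv"
        "strings"
    )

    // minor extracts the minor number from a "vMAJOR.MINOR.PATCH" version.
    func minor(v string) int {
        parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
        n, _ := strconv.Atoi(parts[1])
        return n
    }

    func main() {
        current, requested := "v1.34.1", "v1.28.0"
        // Refuse any restart that would move an existing cluster to an
        // older release, mirroring the message in the log above.
        if minor(requested) < minor(current) {
            fmt.Printf("Unable to safely downgrade existing Kubernetes %s cluster to %s\n",
                current, requested)
        }
    }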

TestMissingContainerUpgrade (141.51s)
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.32.0.3392625922 start -p missing-upgrade-765973 --memory=3072 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.32.0.3392625922 start -p missing-upgrade-765973 --memory=3072 --driver=docker  --container-runtime=containerd: (1m1.537654098s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-765973
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-765973
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-765973 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-765973 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m12.227893686s)
helpers_test.go:175: Cleaning up "missing-upgrade-765973" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-765973
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-765973: (5.700049467s)
--- PASS: TestMissingContainerUpgrade (141.51s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:85: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-185745 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:85: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-185745 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd: exit status 14 (104.131679ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-185745] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21835
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21835-2307/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21835-2307/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)

TestNoKubernetes/serial/StartWithK8s (38.64s)
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:97: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-185745 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:97: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-185745 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (38.132623258s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-185745 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (38.64s)

TestNoKubernetes/serial/StartWithStopK8s (25.8s)
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:114: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-185745 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:114: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-185745 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (23.178596837s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-185745 status -o json
no_kubernetes_test.go:202: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-185745 status -o json: exit status 2 (464.160147ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-185745","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:126: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-185745
no_kubernetes_test.go:126: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-185745: (2.153343887s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (25.80s)

TestNoKubernetes/serial/Start (8.14s)
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:138: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-185745 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:138: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-185745 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (8.144566137s)
--- PASS: TestNoKubernetes/serial/Start (8.14s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.28s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-185745 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-185745 "sudo systemctl is-active --quiet service kubelet": exit status 1 (281.986857ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.28s)
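Here the non-zero exit is the assertion succeeding: `systemctl is-active --quiet` exits 0 only when the unit is active, so the status-3 exit relayed over SSH confirms the kubelet is not running in the --no-kubernetes profile. The same probe outside the harness:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // is-active --quiet prints nothing and exits 0 only if the unit
        // is active; a non-zero exit (3 in the log above) means it is
        // not running.
        err := exec.Command("out/minikube-linux-arm64", "ssh",
            "-p", "NoKubernetes-185745",
            "sudo systemctl is-active --quiet service kubelet").Run()
        if err != nil {
            fmt.Println("kubelet inactive, as expected:", err)
        } else {
            fmt.Println("kubelet unexpectedly active")
        }
    }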

TestNoKubernetes/serial/ProfileList (0.68s)
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:171: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:181: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.68s)

TestNoKubernetes/serial/Stop (1.28s)
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:160: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-185745
no_kubernetes_test.go:160: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-185745: (1.281430111s)
--- PASS: TestNoKubernetes/serial/Stop (1.28s)

TestNoKubernetes/serial/StartNoArgs (6.65s)
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:193: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-185745 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:193: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-185745 --driver=docker  --container-runtime=containerd: (6.653632354s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.65s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.35s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-185745 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-185745 "sudo systemctl is-active --quiet service kubelet": exit status 1 (352.972416ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.35s)

TestStoppedBinaryUpgrade/Setup (0.7s)
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.70s)

TestStoppedBinaryUpgrade/Upgrade (64.21s)
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.302466207 start -p stopped-upgrade-196329 --memory=3072 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.302466207 start -p stopped-upgrade-196329 --memory=3072 --vm-driver=docker  --container-runtime=containerd: (40.616620597s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.302466207 -p stopped-upgrade-196329 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.302466207 -p stopped-upgrade-196329 stop: (1.3293698s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-196329 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-196329 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (22.262670401s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (64.21s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.93s)
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-196329
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-196329: (1.933178293s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.93s)

TestPause/serial/Start (85.19s)
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-440445 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd
E1101 09:09:24.518307    4107 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2307/.minikube/profiles/functional-173309/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-440445 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (1m25.191014377s)
--- PASS: TestPause/serial/Start (85.19s)

TestPause/serial/SecondStartNoReconfiguration (7.94s)
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-440445 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-440445 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (7.924296251s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (7.94s)

TestNetworkPlugins/group/false (4.99s)
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-351332 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-351332 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd: exit status 14 (207.810098ms)

                                                
                                                
-- stdout --
	* [false-351332] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21835
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21835-2307/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21835-2307/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 09:10:55.865655  189001 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:10:55.865855  189001 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:10:55.865866  189001 out.go:374] Setting ErrFile to fd 2...
	I1101 09:10:55.865872  189001 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:10:55.866119  189001 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21835-2307/.minikube/bin
	I1101 09:10:55.866522  189001 out.go:368] Setting JSON to false
	I1101 09:10:55.867427  189001 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":3205,"bootTime":1761985051,"procs":195,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1101 09:10:55.867501  189001 start.go:143] virtualization:  
	I1101 09:10:55.871050  189001 out.go:179] * [false-351332] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1101 09:10:55.874944  189001 out.go:179]   - MINIKUBE_LOCATION=21835
	I1101 09:10:55.875138  189001 notify.go:221] Checking for updates...
	I1101 09:10:55.880927  189001 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 09:10:55.883971  189001 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21835-2307/kubeconfig
	I1101 09:10:55.887052  189001 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21835-2307/.minikube
	I1101 09:10:55.889967  189001 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1101 09:10:55.892928  189001 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 09:10:55.896685  189001 config.go:182] Loaded profile config "pause-440445": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1101 09:10:55.896785  189001 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 09:10:55.932167  189001 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1101 09:10:55.932320  189001 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 09:10:56.009246  189001 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-01 09:10:55.997484416 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 09:10:56.009362  189001 docker.go:319] overlay module found
	I1101 09:10:56.012505  189001 out.go:179] * Using the docker driver based on user configuration
	I1101 09:10:56.015413  189001 start.go:309] selected driver: docker
	I1101 09:10:56.015442  189001 start.go:930] validating driver "docker" against <nil>
	I1101 09:10:56.015456  189001 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 09:10:56.019119  189001 out.go:203] 
	W1101 09:10:56.021969  189001 out.go:285] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I1101 09:10:56.024838  189001 out.go:203] 

** /stderr **
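For reference, exit status 14 asserted above is minikube's MK_USAGE error class: with --container-runtime=containerd a CNI is mandatory, so --cni=false is rejected before any cluster is created. A minimal, hypothetical Go sketch of the same check (not the actual net_test.go code; the binary path and profile name are taken from the run above):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// Hypothetical check, not the real net_test.go implementation: run
	// minikube with a CNI/runtime combination that is rejected up front
	// and confirm the MK_USAGE exit code (14) seen in the log above.
	func main() {
		cmd := exec.Command("out/minikube-linux-arm64", "start", "-p", "false-351332",
			"--cni=false", "--container-runtime=containerd", "--driver=docker")
		err := cmd.Run()
		if exitErr, ok := err.(*exec.ExitError); ok && exitErr.ExitCode() == 14 {
			fmt.Println("got expected MK_USAGE exit code 14")
			return
		}
		fmt.Printf("unexpected result: %v\n", err)
	}
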
net_test.go:88: 
----------------------- debugLogs start: false-351332 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-351332

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-351332

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-351332

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-351332

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-351332

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-351332

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-351332

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-351332

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-351332

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-351332

>>> host: /etc/nsswitch.conf:
* Profile "false-351332" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-351332"

>>> host: /etc/hosts:
* Profile "false-351332" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-351332"

>>> host: /etc/resolv.conf:
* Profile "false-351332" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-351332"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-351332

>>> host: crictl pods:
* Profile "false-351332" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-351332"

>>> host: crictl containers:
* Profile "false-351332" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-351332"

>>> k8s: describe netcat deployment:
error: context "false-351332" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-351332" does not exist

>>> k8s: netcat logs:
error: context "false-351332" does not exist

>>> k8s: describe coredns deployment:
error: context "false-351332" does not exist

>>> k8s: describe coredns pods:
error: context "false-351332" does not exist

>>> k8s: coredns logs:
error: context "false-351332" does not exist

>>> k8s: describe api server pod(s):
error: context "false-351332" does not exist

>>> k8s: api server logs:
error: context "false-351332" does not exist

>>> host: /etc/cni:
* Profile "false-351332" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-351332"

>>> host: ip a s:
* Profile "false-351332" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-351332"

>>> host: ip r s:
* Profile "false-351332" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-351332"

>>> host: iptables-save:
* Profile "false-351332" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-351332"

>>> host: iptables table nat:
* Profile "false-351332" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-351332"

>>> k8s: describe kube-proxy daemon set:
error: context "false-351332" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-351332" does not exist

>>> k8s: kube-proxy logs:
error: context "false-351332" does not exist

>>> host: kubelet daemon status:
* Profile "false-351332" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-351332"

>>> host: kubelet daemon config:
* Profile "false-351332" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-351332"

>>> k8s: kubelet logs:
* Profile "false-351332" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-351332"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-351332" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-351332"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-351332" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-351332"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21835-2307/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 01 Nov 2025 09:10:54 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: pause-440445
contexts:
- context:
    cluster: pause-440445
    extensions:
    - extension:
        last-update: Sat, 01 Nov 2025 09:10:54 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: pause-440445
  name: pause-440445
current-context: pause-440445
kind: Config
preferences: {}
users:
- name: pause-440445
  user:
    client-certificate: /home/jenkins/minikube-integration/21835-2307/.minikube/profiles/pause-440445/client.crt
    client-key: /home/jenkins/minikube-integration/21835-2307/.minikube/profiles/pause-440445/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-351332

>>> host: docker daemon status:
* Profile "false-351332" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-351332"

>>> host: docker daemon config:
* Profile "false-351332" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-351332"

>>> host: /etc/docker/daemon.json:
* Profile "false-351332" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-351332"

>>> host: docker system info:
* Profile "false-351332" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-351332"

>>> host: cri-docker daemon status:
* Profile "false-351332" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-351332"

>>> host: cri-docker daemon config:
* Profile "false-351332" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-351332"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-351332" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-351332"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-351332" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-351332"

>>> host: cri-dockerd version:
* Profile "false-351332" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-351332"

>>> host: containerd daemon status:
* Profile "false-351332" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-351332"

>>> host: containerd daemon config:
* Profile "false-351332" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-351332"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-351332" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-351332"

>>> host: /etc/containerd/config.toml:
* Profile "false-351332" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-351332"

>>> host: containerd config dump:
* Profile "false-351332" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-351332"

>>> host: crio daemon status:
* Profile "false-351332" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-351332"

>>> host: crio daemon config:
* Profile "false-351332" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-351332"

>>> host: /etc/crio:
* Profile "false-351332" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-351332"

>>> host: crio config:
* Profile "false-351332" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-351332"

----------------------- debugLogs end: false-351332 [took: 4.574709966s] --------------------------------
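The kubectl config dumped above is a standard kubeconfig; its only context is the leftover pause-440445 profile, which is why every probe against false-351332 failed. A sketch of inspecting it with client-go's clientcmd (assumes k8s.io/client-go is available; the path is the KUBECONFIG from this run):

	package main

	import (
		"fmt"

		"k8s.io/client-go/tools/clientcmd"
	)

	// Load the kubeconfig dumped above and list its contexts; in this run
	// only pause-440445 is present, not false-351332.
	func main() {
		cfg, err := clientcmd.LoadFromFile("/home/jenkins/minikube-integration/21835-2307/kubeconfig")
		if err != nil {
			panic(err)
		}
		fmt.Println("current-context:", cfg.CurrentContext)
		for name := range cfg.Contexts {
			fmt.Println("available context:", name)
		}
	}
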
helpers_test.go:175: Cleaning up "false-351332" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-351332
--- PASS: TestNetworkPlugins/group/false (4.99s)

TestPause/serial/Pause (0.84s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-440445 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.84s)

TestPause/serial/VerifyStatus (0.4s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-440445 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-440445 --output=json --layout=cluster: exit status 2 (402.519877ms)

-- stdout --
	{"Name":"pause-440445","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, istio-operator","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-440445","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
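The --layout=cluster output above encodes component state as HTTP-style status codes (200 OK, 405 Stopped, 418 Paused). A minimal sketch of decoding it, assuming a struct trimmed to only the fields visible in this run (the real minikube schema has more):

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// ClusterState mirrors only the fields visible in the status output
	// above; it is an assumption-trimmed view, not minikube's full schema.
	type ClusterState struct {
		Name       string `json:"Name"`
		StatusCode int    `json:"StatusCode"`
		StatusName string `json:"StatusName"`
		Nodes      []struct {
			Name       string `json:"Name"`
			StatusCode int    `json:"StatusCode"`
			StatusName string `json:"StatusName"`
		} `json:"Nodes"`
	}

	func main() {
		raw := []byte(`{"Name":"pause-440445","StatusCode":418,"StatusName":"Paused","Nodes":[{"Name":"pause-440445","StatusCode":200,"StatusName":"OK"}]}`)
		var st ClusterState
		if err := json.Unmarshal(raw, &st); err != nil {
			panic(err)
		}
		// 418 is how minikube reports "Paused" in this layout.
		fmt.Printf("%s: %d %s\n", st.Name, st.StatusCode, st.StatusName)
	}
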
--- PASS: TestPause/serial/VerifyStatus (0.40s)

TestPause/serial/Unpause (0.82s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-440445 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.82s)

TestPause/serial/PauseAgain (1.08s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-440445 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-440445 --alsologtostderr -v=5: (1.077164368s)
--- PASS: TestPause/serial/PauseAgain (1.08s)

TestPause/serial/DeletePaused (3.45s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-440445 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-440445 --alsologtostderr -v=5: (3.448417927s)
--- PASS: TestPause/serial/DeletePaused (3.45s)

TestPause/serial/VerifyDeletedResources (0.18s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-440445
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-440445: exit status 1 (21.404314ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: get pause-440445: no such volume

** /stderr **
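The cleanup check above relies on `docker volume inspect` exiting non-zero (with `[]` on stdout) once the volume is gone. A hypothetical re-implementation of the same probe in Go (not the actual pause_test.go code):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// volumeGone reports whether `docker volume inspect` fails with a
	// non-zero exit, which is how the daemon signals "no such volume".
	// If docker itself is missing, err is not an *exec.ExitError and we
	// conservatively return false.
	func volumeGone(name string) bool {
		err := exec.Command("docker", "volume", "inspect", name).Run()
		_, isExitErr := err.(*exec.ExitError)
		return isExitErr
	}

	func main() {
		fmt.Println("pause-440445 gone:", volumeGone("pause-440445"))
	}
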
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.18s)

TestStartStop/group/old-k8s-version/serial/FirstStart (64.36s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-724526 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-724526 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0: (1m4.361782755s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (64.36s)

TestStartStop/group/old-k8s-version/serial/DeployApp (9.46s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-724526 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [df65d2bb-4762-469f-8695-aa4128822431] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [df65d2bb-4762-469f-8695-aa4128822431] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.003569906s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-724526 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.46s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.32s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-724526 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-724526 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.192479978s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-724526 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.32s)

TestStartStop/group/old-k8s-version/serial/Stop (12.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-724526 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-724526 --alsologtostderr -v=3: (12.103181874s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.10s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-724526 -n old-k8s-version-724526
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-724526 -n old-k8s-version-724526: exit status 7 (69.655774ms)

-- stdout --
	Stopped

-- /stdout --
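Exit status 7 from `minikube status` denotes a stopped host, which is why the test notes it below as "may be ok" and proceeds to enable the addon against the stopped profile. A rough sketch of that tolerance, with the binary path and profile name taken from this run:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// Sketch only: treat minikube's exit status 7 (host stopped) as an
	// acceptable state before running `addons enable` offline.
	func main() {
		cmd := exec.Command("out/minikube-linux-arm64", "status",
			"--format={{.Host}}", "-p", "old-k8s-version-724526")
		out, err := cmd.Output()
		if exitErr, ok := err.(*exec.ExitError); ok && exitErr.ExitCode() == 7 {
			fmt.Printf("host is %s; exit 7 is expected while stopped\n", out)
			return
		}
		fmt.Printf("output=%s err=%v\n", out, err)
	}
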
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-724526 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/old-k8s-version/serial/SecondStart (55.51s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-724526 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0
E1101 09:14:24.519308    4107 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2307/.minikube/profiles/functional-173309/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-724526 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0: (55.061368497s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-724526 -n old-k8s-version-724526
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (55.51s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.02s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-pkw4c" [14edccec-eb36-490a-8b1a-5be0ac9cf635] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.014174162s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.02s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.13s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-pkw4c" [14edccec-eb36-490a-8b1a-5be0ac9cf635] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003832093s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-724526 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.13s)

TestStartStop/group/no-preload/serial/FirstStart (69.32s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-491485 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-491485 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (1m9.321502651s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (69.32s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.34s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-724526 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.34s)

TestStartStop/group/old-k8s-version/serial/Pause (3.8s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-724526 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-arm64 pause -p old-k8s-version-724526 --alsologtostderr -v=1: (1.00354451s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-724526 -n old-k8s-version-724526
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-724526 -n old-k8s-version-724526: exit status 2 (399.32497ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-724526 -n old-k8s-version-724526
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-724526 -n old-k8s-version-724526: exit status 2 (404.613549ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-724526 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-724526 -n old-k8s-version-724526
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-724526 -n old-k8s-version-724526
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.80s)

TestStartStop/group/embed-certs/serial/FirstStart (93.11s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-632846 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-632846 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (1m33.106254316s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (93.11s)

TestStartStop/group/no-preload/serial/DeployApp (8.36s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-491485 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [a9c4e128-3834-42c2-a75d-26876b596c18] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [a9c4e128-3834-42c2-a75d-26876b596c18] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.003413359s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-491485 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.36s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.09s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-491485 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-491485 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.09s)

TestStartStop/group/no-preload/serial/Stop (12.2s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-491485 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-491485 --alsologtostderr -v=3: (12.199187259s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.20s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-491485 -n no-preload-491485
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-491485 -n no-preload-491485: exit status 7 (78.432935ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-491485 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/no-preload/serial/SecondStart (53.99s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-491485 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-491485 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (53.54803128s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-491485 -n no-preload-491485
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (53.99s)

TestStartStop/group/embed-certs/serial/DeployApp (9.5s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-632846 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [67643a94-a493-4fe6-b7f7-adef0b577d21] Pending
helpers_test.go:352: "busybox" [67643a94-a493-4fe6-b7f7-adef0b577d21] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [67643a94-a493-4fe6-b7f7-adef0b577d21] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.003874483s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-632846 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.50s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.78s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-632846 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-632846 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.642569364s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-632846 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.78s)

TestStartStop/group/embed-certs/serial/Stop (12.73s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-632846 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-632846 --alsologtostderr -v=3: (12.726806362s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.73s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-632846 -n embed-certs-632846
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-632846 -n embed-certs-632846: exit status 7 (68.856383ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-632846 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/embed-certs/serial/SecondStart (50.03s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-632846 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
E1101 09:17:19.197042    4107 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2307/.minikube/profiles/addons-775283/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:17:27.595837    4107 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2307/.minikube/profiles/functional-173309/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-632846 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (49.529507879s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-632846 -n embed-certs-632846
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (50.03s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-f9l98" [d644c7ef-2c36-4639-b00d-b0e92fa958f9] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003731007s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.09s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-f9l98" [d644c7ef-2c36-4639-b00d-b0e92fa958f9] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.002981181s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-491485 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.09s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-491485 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/no-preload/serial/Pause (3.15s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-491485 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-491485 -n no-preload-491485
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-491485 -n no-preload-491485: exit status 2 (334.190197ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-491485 -n no-preload-491485
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-491485 -n no-preload-491485: exit status 2 (365.537536ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-491485 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-491485 -n no-preload-491485
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-491485 -n no-preload-491485
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.15s)
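The pause check above can be replayed by hand with the same binary and profile; as the log shows, status exits 2 while components are paused, which the test tolerates ("may be ok"):

    out/minikube-linux-arm64 pause -p no-preload-491485
    out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-491485    # prints "Paused", exit status 2
    out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-491485     # prints "Stopped", exit status 2
    out/minikube-linux-arm64 unpause -p no-preload-491485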

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (58.79s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-285725 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-285725 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (58.788064143s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (58.79s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-pvqd8" [4ba86ec1-d6d7-4c43-bfee-869bf44e6695] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004556319s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.18s)
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-pvqd8" [4ba86ec1-d6d7-4c43-bfee-869bf44e6695] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004753919s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-632846 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.18s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.33s)
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-632846 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.33s)

TestStartStop/group/embed-certs/serial/Pause (3.93s)
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-632846 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-632846 -n embed-certs-632846
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-632846 -n embed-certs-632846: exit status 2 (441.8726ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-632846 -n embed-certs-632846
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-632846 -n embed-certs-632846: exit status 2 (384.707182ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-632846 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-arm64 unpause -p embed-certs-632846 --alsologtostderr -v=1: (1.069483726s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-632846 -n embed-certs-632846
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-632846 -n embed-certs-632846
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.93s)

TestStartStop/group/newest-cni/serial/FirstStart (39.9s)
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-524048 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
E1101 09:18:35.633314    4107 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2307/.minikube/profiles/old-k8s-version-724526/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:18:35.639696    4107 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2307/.minikube/profiles/old-k8s-version-724526/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:18:35.651078    4107 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2307/.minikube/profiles/old-k8s-version-724526/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:18:35.672439    4107 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2307/.minikube/profiles/old-k8s-version-724526/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:18:35.713828    4107 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2307/.minikube/profiles/old-k8s-version-724526/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:18:35.795194    4107 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2307/.minikube/profiles/old-k8s-version-724526/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:18:35.956617    4107 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2307/.minikube/profiles/old-k8s-version-724526/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:18:36.278290    4107 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2307/.minikube/profiles/old-k8s-version-724526/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:18:36.920041    4107 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2307/.minikube/profiles/old-k8s-version-724526/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:18:38.201297    4107 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2307/.minikube/profiles/old-k8s-version-724526/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:18:40.762612    4107 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2307/.minikube/profiles/old-k8s-version-724526/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-524048 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (39.901398063s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (39.90s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.5s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-285725 create -f testdata/busybox.yaml
E1101 09:18:45.883961    4107 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2307/.minikube/profiles/old-k8s-version-724526/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [8d44bc29-3b1c-4baf-8c49-e355a6865b63] Pending
helpers_test.go:352: "busybox" [8d44bc29-3b1c-4baf-8c49-e355a6865b63] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [8d44bc29-3b1c-4baf-8c49-e355a6865b63] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.003281259s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-285725 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.50s)
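The deploy step applies testdata/busybox.yaml, waits for the pod to become Ready, then execs into it to read the open-file limit. A hand-run equivalent with the same context (the wait line is a sketch; the harness polls on its own rather than using kubectl wait):

    kubectl --context default-k8s-diff-port-285725 create -f testdata/busybox.yaml
    kubectl --context default-k8s-diff-port-285725 wait pod busybox --for=condition=Ready --timeout=8m
    kubectl --context default-k8s-diff-port-285725 exec busybox -- /bin/sh -c "ulimit -n"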

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.22s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-285725 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E1101 09:18:56.129971    4107 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2307/.minikube/profiles/old-k8s-version-724526/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-285725 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.103595655s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-285725 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.22s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (12.49s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-285725 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-285725 --alsologtostderr -v=3: (12.493331592s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.49s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.95s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-524048 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.95s)

TestStartStop/group/newest-cni/serial/Stop (1.34s)
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-524048 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-524048 --alsologtostderr -v=3: (1.340094041s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.34s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-524048 -n newest-cni-524048
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-524048 -n newest-cni-524048: exit status 7 (65.495135ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-524048 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)
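Enabling an addon against a stopped profile records the setting so it can take effect on the next start (exercised by SecondStart below); as the log notes, exit status 7 from status denotes a stopped host. A minimal replay of the sequence above:

    out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-524048    # prints "Stopped", exit status 7
    out/minikube-linux-arm64 addons enable dashboard -p newest-cni-524048 --images=MetricsScraper=registry.k8s.io/echoserver:1.4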

TestStartStop/group/newest-cni/serial/SecondStart (21.36s)
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-524048 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-524048 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (20.771595932s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-524048 -n newest-cni-524048
E1101 09:19:24.518923    4107 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2307/.minikube/profiles/functional-173309/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (21.36s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.24s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-285725 -n default-k8s-diff-port-285725
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-285725 -n default-k8s-diff-port-285725: exit status 7 (90.643209ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-285725 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.24s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (58.22s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-285725 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
E1101 09:19:16.611578    4107 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2307/.minikube/profiles/old-k8s-version-724526/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-285725 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (57.722150672s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-285725 -n default-k8s-diff-port-285725
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (58.22s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.46s)
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-524048 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.46s)

TestStartStop/group/newest-cni/serial/Pause (4.48s)
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-524048 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-524048 -n newest-cni-524048
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-524048 -n newest-cni-524048: exit status 2 (540.667051ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-524048 -n newest-cni-524048
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-524048 -n newest-cni-524048: exit status 2 (519.571045ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-524048 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-arm64 unpause -p newest-cni-524048 --alsologtostderr -v=1: (1.099202072s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-524048 -n newest-cni-524048
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-524048 -n newest-cni-524048
--- PASS: TestStartStop/group/newest-cni/serial/Pause (4.48s)

TestNetworkPlugins/group/auto/Start (83.73s)
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-351332 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd
E1101 09:19:57.573084    4107 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2307/.minikube/profiles/old-k8s-version-724526/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-351332 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd: (1m23.726481458s)
--- PASS: TestNetworkPlugins/group/auto/Start (83.73s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-brk5c" [1b4911c7-f291-4888-a959-ba7edb515e6e] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.002785335s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.1s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-brk5c" [1b4911c7-f291-4888-a959-ba7edb515e6e] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003729609s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-285725 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.27s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-285725 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.27s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (3.08s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-285725 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-285725 -n default-k8s-diff-port-285725
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-285725 -n default-k8s-diff-port-285725: exit status 2 (347.484088ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-285725 -n default-k8s-diff-port-285725
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-285725 -n default-k8s-diff-port-285725: exit status 2 (342.777999ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-285725 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-285725 -n default-k8s-diff-port-285725
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-285725 -n default-k8s-diff-port-285725
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.08s)
E1101 09:26:07.438096    4107 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2307/.minikube/profiles/auto-351332/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:26:12.514801    4107 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2307/.minikube/profiles/no-preload-491485/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:26:17.679679    4107 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2307/.minikube/profiles/auto-351332/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:26:30.058283    4107 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2307/.minikube/profiles/default-k8s-diff-port-285725/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:26:38.161440    4107 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2307/.minikube/profiles/auto-351332/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"

TestNetworkPlugins/group/kindnet/Start (84.34s)
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-351332 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-351332 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd: (1m24.34129859s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (84.34s)

TestNetworkPlugins/group/auto/KubeletFlags (0.32s)
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-351332 "pgrep -a kubelet"
I1101 09:20:56.881584    4107 config.go:182] Loaded profile config "auto-351332": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.32s)

TestNetworkPlugins/group/auto/NetCatPod (10.34s)
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-351332 replace --force -f testdata/netcat-deployment.yaml
I1101 09:20:57.217264    4107 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-2th79" [4cf5c8a9-7cf5-4643-8ef9-de35ad1ebcca] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-2th79" [4cf5c8a9-7cf5-4643-8ef9-de35ad1ebcca] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.003571095s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.34s)

TestNetworkPlugins/group/auto/DNS (0.18s)
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-351332 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.18s)

TestNetworkPlugins/group/auto/Localhost (0.17s)
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-351332 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.17s)

TestNetworkPlugins/group/auto/HairPin (0.15s)
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-351332 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.15s)
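Together, the DNS, Localhost, and HairPin checks above probe name resolution, local port reachability, and hairpin traffic from inside the netcat deployment; each is a plain kubectl exec and can be replayed directly:

    kubectl --context auto-351332 exec deployment/netcat -- nslookup kubernetes.default
    kubectl --context auto-351332 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
    kubectl --context auto-351332 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"

The last probe connects back to the pod through its own service name, which only succeeds when the network plugin handles hairpin traffic.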

TestNetworkPlugins/group/calico/Start (64.52s)
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-351332 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd
E1101 09:21:33.008302    4107 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2307/.minikube/profiles/no-preload-491485/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-351332 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd: (1m4.524046557s)
--- PASS: TestNetworkPlugins/group/calico/Start (64.52s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-6wc29" [81273207-07f5-41d5-a3ae-575e99c0ff46] Running
E1101 09:21:53.489822    4107 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2307/.minikube/profiles/no-preload-491485/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004480847s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.4s)
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-351332 "pgrep -a kubelet"
I1101 09:21:56.012849    4107 config.go:182] Loaded profile config "kindnet-351332": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.40s)

TestNetworkPlugins/group/kindnet/NetCatPod (10.43s)
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-351332 replace --force -f testdata/netcat-deployment.yaml
I1101 09:21:56.371428    4107 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-7lvsb" [68783e26-ccd8-4d8a-83d1-cc2dc3318265] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-7lvsb" [68783e26-ccd8-4d8a-83d1-cc2dc3318265] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.003086722s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.43s)

TestNetworkPlugins/group/kindnet/DNS (0.25s)
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-351332 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.25s)

TestNetworkPlugins/group/kindnet/Localhost (0.24s)
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-351332 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.24s)

TestNetworkPlugins/group/kindnet/HairPin (0.22s)
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-351332 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.22s)

TestNetworkPlugins/group/custom-flannel/Start (73.69s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-351332 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-351332 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd: (1m13.690402157s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (73.69s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-t4kdx" [66c1be6c-a098-4c51-8efa-f1ebc6eb4ebb] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
E1101 09:22:34.451197    4107 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2307/.minikube/profiles/no-preload-491485/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "calico-node-t4kdx" [66c1be6c-a098-4c51-8efa-f1ebc6eb4ebb] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.003850166s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.54s)
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-351332 "pgrep -a kubelet"
I1101 09:22:40.002018    4107 config.go:182] Loaded profile config "calico-351332": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.54s)

TestNetworkPlugins/group/calico/NetCatPod (9.46s)
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-351332 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-p9gnc" [5860584f-1ee5-4c4f-8272-da2379517746] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-p9gnc" [5860584f-1ee5-4c4f-8272-da2379517746] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 9.003713327s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (9.46s)

TestNetworkPlugins/group/calico/DNS (0.21s)
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-351332 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.21s)

TestNetworkPlugins/group/calico/Localhost (0.19s)
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-351332 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.19s)

TestNetworkPlugins/group/calico/HairPin (0.2s)
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-351332 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.20s)

TestNetworkPlugins/group/enable-default-cni/Start (78.38s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-351332 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd
E1101 09:23:35.633264    4107 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2307/.minikube/profiles/old-k8s-version-724526/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:23:46.194803    4107 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2307/.minikube/profiles/default-k8s-diff-port-285725/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:23:46.201060    4107 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2307/.minikube/profiles/default-k8s-diff-port-285725/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:23:46.212324    4107 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2307/.minikube/profiles/default-k8s-diff-port-285725/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:23:46.233629    4107 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2307/.minikube/profiles/default-k8s-diff-port-285725/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:23:46.274932    4107 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2307/.minikube/profiles/default-k8s-diff-port-285725/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:23:46.356255    4107 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2307/.minikube/profiles/default-k8s-diff-port-285725/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:23:46.518413    4107 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2307/.minikube/profiles/default-k8s-diff-port-285725/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-351332 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd: (1m18.377117898s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (78.38s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.51s)
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-351332 "pgrep -a kubelet"
E1101 09:23:46.839723    4107 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2307/.minikube/profiles/default-k8s-diff-port-285725/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
I1101 09:23:47.122937    4107 config.go:182] Loaded profile config "custom-flannel-351332": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.51s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (9.45s)
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-351332 replace --force -f testdata/netcat-deployment.yaml
E1101 09:23:47.486886    4107 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2307/.minikube/profiles/default-k8s-diff-port-285725/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-h4jvf" [ba1b5950-9f76-4de5-80bb-5bb4457732d8] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1101 09:23:48.768995    4107 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2307/.minikube/profiles/default-k8s-diff-port-285725/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:23:51.330400    4107 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2307/.minikube/profiles/default-k8s-diff-port-285725/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-h4jvf" [ba1b5950-9f76-4de5-80bb-5bb4457732d8] Running
E1101 09:23:56.372545    4107 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2307/.minikube/profiles/no-preload-491485/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:23:56.452388    4107 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2307/.minikube/profiles/default-k8s-diff-port-285725/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 9.014109877s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (9.45s)

TestNetworkPlugins/group/custom-flannel/DNS (0.38s)
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-351332 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.38s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.26s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-351332 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.26s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.23s)
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-351332 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.23s)

TestNetworkPlugins/group/flannel/Start (60.77s)
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-351332 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd
E1101 09:24:24.518926    4107 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2307/.minikube/profiles/functional-173309/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:24:27.175343    4107 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2307/.minikube/profiles/default-k8s-diff-port-285725/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-351332 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd: (1m0.767882255s)
--- PASS: TestNetworkPlugins/group/flannel/Start (60.77s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.38s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-351332 "pgrep -a kubelet"
I1101 09:24:35.778450    4107 config.go:182] Loaded profile config "enable-default-cni-351332": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.38s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.34s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-351332 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-fnxxh" [4722114d-bc7a-44d2-a8b8-5128a0205d5a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-fnxxh" [4722114d-bc7a-44d2-a8b8-5128a0205d5a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.003867746s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.34s)
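
The pod-health wait above can be approximated with plain kubectl (a sketch, not the harness's internal poller; the label and namespace are taken from the wait line above):

	kubectl --context enable-default-cni-351332 -n default wait \
	  --for=condition=Ready pod -l app=netcat --timeout=15m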

TestNetworkPlugins/group/enable-default-cni/DNS (0.27s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-351332 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.27s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-351332 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.18s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.21s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-351332 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.21s)

TestNetworkPlugins/group/bridge/Start (88.68s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-351332 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-351332 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd: (1m28.683602672s)
--- PASS: TestNetworkPlugins/group/bridge/Start (88.68s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-q5zqq" [b61e23a9-c8dc-4bdf-8eb3-5c1155849fad] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004234244s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
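
A manual equivalent of this controller check (namespace and label come from the wait line above):

	kubectl --context flannel-351332 -n kube-flannel get pods -l app=flannel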

TestNetworkPlugins/group/flannel/KubeletFlags (0.39s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-351332 "pgrep -a kubelet"
I1101 09:25:26.759164    4107 config.go:182] Loaded profile config "flannel-351332": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.39s)
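
The KubeletFlags probe simply dumps the running kubelet command line over SSH; the stock-binary equivalent is:

	minikube ssh -p flannel-351332 "pgrep -a kubelet"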

TestNetworkPlugins/group/flannel/NetCatPod (10.34s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-351332 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-md2b6" [b7926666-a4c5-4962-9fee-37f3485d0592] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-md2b6" [b7926666-a4c5-4962-9fee-37f3485d0592] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.008228361s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.34s)

TestNetworkPlugins/group/flannel/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-351332 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.17s)

TestNetworkPlugins/group/flannel/Localhost (0.23s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-351332 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.23s)

TestNetworkPlugins/group/flannel/HairPin (0.22s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-351332 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.22s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-351332 "pgrep -a kubelet"
I1101 09:26:38.752716    4107 config.go:182] Loaded profile config "bridge-351332": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.30s)

TestNetworkPlugins/group/bridge/NetCatPod (9.26s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-351332 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-hz5sj" [c35699a2-0d61-45ad-b9a5-1e65917d8c08] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1101 09:26:40.214457    4107 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2307/.minikube/profiles/no-preload-491485/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-hz5sj" [c35699a2-0d61-45ad-b9a5-1e65917d8c08] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.003962231s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.26s)

TestNetworkPlugins/group/bridge/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-351332 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.18s)

TestNetworkPlugins/group/bridge/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-351332 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.15s)

TestNetworkPlugins/group/bridge/HairPin (0.21s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-351332 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.21s)

Test skip (30/332)

TestDownloadOnly/v1.28.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

TestDownloadOnly/v1.28.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

TestDownloadOnly/v1.28.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

TestDownloadOnly/v1.34.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.34.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.1/cached-images (0.00s)

TestDownloadOnly/v1.34.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.34.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.1/binaries (0.00s)

TestDownloadOnly/v1.34.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.34.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.1/kubectl (0.00s)

TestDownloadOnlyKic (0.67s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-602806 --alsologtostderr --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:248: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-602806" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-602806
--- SKIP: TestDownloadOnlyKic (0.67s)

TestOffline (0s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

TestAddons/serial/GCPAuth/RealCredentials (0.01s)

=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:759: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.01s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestAddons/parallel/AmdGpuDevicePlugin (0s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1033: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1792: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestFunctionalNewestKubernetes (0s)

=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestISOImage (0s)

=== RUN   TestISOImage
iso_test.go:35: This test requires a VM driver
--- SKIP: TestISOImage (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.16s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-810487" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-810487
--- SKIP: TestStartStop/group/disable-driver-mounts (0.16s)

TestNetworkPlugins/group/kubenet (4.66s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as the containerd container runtime requires CNI
panic.go:636: 
----------------------- debugLogs start: kubenet-351332 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-351332

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-351332

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-351332

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-351332

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-351332

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-351332

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-351332

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-351332

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-351332

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-351332

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-351332" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-351332"

>>> host: /etc/hosts:
* Profile "kubenet-351332" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-351332"

>>> host: /etc/resolv.conf:
* Profile "kubenet-351332" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-351332"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-351332

>>> host: crictl pods:
* Profile "kubenet-351332" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-351332"

>>> host: crictl containers:
* Profile "kubenet-351332" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-351332"

>>> k8s: describe netcat deployment:
error: context "kubenet-351332" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-351332" does not exist

>>> k8s: netcat logs:
error: context "kubenet-351332" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-351332" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-351332" does not exist

>>> k8s: coredns logs:
error: context "kubenet-351332" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-351332" does not exist

>>> k8s: api server logs:
error: context "kubenet-351332" does not exist

>>> host: /etc/cni:
* Profile "kubenet-351332" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-351332"

>>> host: ip a s:
* Profile "kubenet-351332" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-351332"

>>> host: ip r s:
* Profile "kubenet-351332" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-351332"

>>> host: iptables-save:
* Profile "kubenet-351332" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-351332"

>>> host: iptables table nat:
* Profile "kubenet-351332" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-351332"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-351332" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-351332" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-351332" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-351332" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-351332"

>>> host: kubelet daemon config:
* Profile "kubenet-351332" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-351332"

>>> k8s: kubelet logs:
* Profile "kubenet-351332" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-351332"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-351332" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-351332"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-351332" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-351332"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21835-2307/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 01 Nov 2025 09:10:03 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: pause-440445
contexts:
- context:
    cluster: pause-440445
    extensions:
    - extension:
        last-update: Sat, 01 Nov 2025 09:10:03 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: pause-440445
  name: pause-440445
current-context: ""
kind: Config
preferences: {}
users:
- name: pause-440445
  user:
    client-certificate: /home/jenkins/minikube-integration/21835-2307/.minikube/profiles/pause-440445/client.crt
    client-key: /home/jenkins/minikube-integration/21835-2307/.minikube/profiles/pause-440445/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-351332

>>> host: docker daemon status:
* Profile "kubenet-351332" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-351332"

>>> host: docker daemon config:
* Profile "kubenet-351332" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-351332"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-351332" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-351332"

>>> host: docker system info:
* Profile "kubenet-351332" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-351332"

>>> host: cri-docker daemon status:
* Profile "kubenet-351332" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-351332"

>>> host: cri-docker daemon config:
* Profile "kubenet-351332" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-351332"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-351332" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-351332"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-351332" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-351332"

>>> host: cri-dockerd version:
* Profile "kubenet-351332" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-351332"

>>> host: containerd daemon status:
* Profile "kubenet-351332" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-351332"

>>> host: containerd daemon config:
* Profile "kubenet-351332" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-351332"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-351332" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-351332"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-351332" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-351332"

>>> host: containerd config dump:
* Profile "kubenet-351332" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-351332"

>>> host: crio daemon status:
* Profile "kubenet-351332" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-351332"

>>> host: crio daemon config:
* Profile "kubenet-351332" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-351332"

>>> host: /etc/crio:
* Profile "kubenet-351332" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-351332"

>>> host: crio config:
* Profile "kubenet-351332" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-351332"

----------------------- debugLogs end: kubenet-351332 [took: 4.492477431s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-351332" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-351332
--- SKIP: TestNetworkPlugins/group/kubenet (4.66s)

TestNetworkPlugins/group/cilium (5.17s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:636: 
----------------------- debugLogs start: cilium-351332 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-351332

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-351332

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-351332

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-351332

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-351332

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-351332

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-351332

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-351332

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-351332

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-351332

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-351332" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-351332"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-351332" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-351332"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-351332" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-351332"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-351332

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-351332" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-351332"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-351332" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-351332"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-351332" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-351332" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-351332" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-351332" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-351332" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-351332" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-351332" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-351332" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-351332" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-351332"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-351332" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-351332"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-351332" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-351332"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-351332" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-351332"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-351332" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-351332"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-351332

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-351332

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-351332" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-351332" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-351332

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-351332

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-351332" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-351332" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-351332" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-351332" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-351332" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-351332" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-351332"

>>> host: kubelet daemon config:
* Profile "cilium-351332" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-351332"

>>> k8s: kubelet logs:
* Profile "cilium-351332" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-351332"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-351332" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-351332"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-351332" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-351332"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-351332

>>> host: docker daemon status:
* Profile "cilium-351332" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-351332"

>>> host: docker daemon config:
* Profile "cilium-351332" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-351332"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-351332" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-351332"

>>> host: docker system info:
* Profile "cilium-351332" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-351332"

>>> host: cri-docker daemon status:
* Profile "cilium-351332" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-351332"

>>> host: cri-docker daemon config:
* Profile "cilium-351332" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-351332"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-351332" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-351332"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-351332" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-351332"

>>> host: cri-dockerd version:
* Profile "cilium-351332" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-351332"

>>> host: containerd daemon status:
* Profile "cilium-351332" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-351332"

>>> host: containerd daemon config:
* Profile "cilium-351332" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-351332"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-351332" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-351332"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-351332" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-351332"

>>> host: containerd config dump:
* Profile "cilium-351332" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-351332"

>>> host: crio daemon status:
* Profile "cilium-351332" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-351332"

>>> host: crio daemon config:
* Profile "cilium-351332" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-351332"

>>> host: /etc/crio:
* Profile "cilium-351332" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-351332"

>>> host: crio config:
* Profile "cilium-351332" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-351332"

----------------------- debugLogs end: cilium-351332 [took: 4.966342432s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-351332" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-351332
--- SKIP: TestNetworkPlugins/group/cilium (5.17s)
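
Note on the debugLogs dump above: every probe failed with a missing-profile or missing-context error because the cilium-351332 profile was never created; the cilium variant of TestNetworkPlugins is skipped on this runner (see the SKIP line above), yet the post-mortem collector still ran against the nonexistent profile. As a minimal sketch, assuming the same out/minikube-linux-arm64 binary the harness invokes, the state can be confirmed by hand with the two commands the output itself suggests:

	out/minikube-linux-arm64 profile list              # cilium-351332 will be absent from the list
	out/minikube-linux-arm64 start -p cilium-351332    # would create the profile the probes expected

These errors are expected noise for a skipped test rather than an additional failure.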
