Test Report: Docker_Linux_docker_arm64 21504

3892f90e7d746f1b37c491f3707229f264f0f5da:2025-09-08:41335

Failed tests (1/347)

Order  Failed test            Duration
258    TestScheduledStopUnix  37.07s
TestScheduledStopUnix (37.07s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-090251 --memory=3072 --driver=docker  --container-runtime=docker
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-090251 --memory=3072 --driver=docker  --container-runtime=docker: (31.963122908s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-090251 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-090251 -n scheduled-stop-090251
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-090251 --schedule 15s
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:98: process 209469 running but should have been killed on reschedule of stop
panic.go:636: *** TestScheduledStopUnix FAILED at 2025-09-08 14:05:38.235490252 +0000 UTC m=+2305.404562900
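The decisive assertion is at scheduled_stop_test.go:98: issuing a new `minikube stop --schedule` is expected to kill the previously daemonized stop process, yet PID 209469 from the earlier schedule was still alive. A minimal Go sketch of the behavior being asserted, using a `sleep` child as a stand-in for the scheduled-stop daemon (the names and flow here are illustrative, not minikube's actual implementation):

```go
package main

import (
	"fmt"
	"os/exec"
	"syscall"
	"time"
)

// rescheduleKill models what TestScheduledStopUnix checks: when a new
// schedule replaces an old one, the previous daemonized stop process
// (tracked by PID) must be terminated, and a liveness probe of that
// PID must then fail.
func rescheduleKill() bool {
	// Stand-in for the first `stop --schedule 5m` daemon.
	prev := exec.Command("sleep", "60")
	if err := prev.Start(); err != nil {
		return false
	}
	pid := prev.Process.Pid

	// Rescheduling (`stop --schedule 15s`) should kill the old daemon
	// before arming the new timer.
	syscall.Kill(pid, syscall.SIGTERM)
	prev.Wait() // reap the child so its PID is released
	time.Sleep(50 * time.Millisecond)

	// Probe with signal 0: an error (ESRCH) means the process is gone,
	// which is the state the failing test expected to observe.
	return syscall.Kill(pid, syscall.Signal(0)) != nil
}

func main() {
	fmt.Println("previous stop daemon killed:", rescheduleKill())
}
```

In the failed run above the equivalent probe still found the old process running, which suggests the kill-on-reschedule step either never fired or raced with the status check.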
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestScheduledStopUnix]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestScheduledStopUnix]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect scheduled-stop-090251
helpers_test.go:243: (dbg) docker inspect scheduled-stop-090251:

-- stdout --
	[
	    {
	        "Id": "d77af57d07af59f38009762436625e7e6b80a645dfe3f16cf16fdb10825bba9b",
	        "Created": "2025-09-08T14:05:10.676204834Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 206605,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-08T14:05:10.744009856Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1a6e5b410fd9226cf2434621073598c7c01bccc994a53260ab0a0d834a0f1815",
	        "ResolvConfPath": "/var/lib/docker/containers/d77af57d07af59f38009762436625e7e6b80a645dfe3f16cf16fdb10825bba9b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d77af57d07af59f38009762436625e7e6b80a645dfe3f16cf16fdb10825bba9b/hostname",
	        "HostsPath": "/var/lib/docker/containers/d77af57d07af59f38009762436625e7e6b80a645dfe3f16cf16fdb10825bba9b/hosts",
	        "LogPath": "/var/lib/docker/containers/d77af57d07af59f38009762436625e7e6b80a645dfe3f16cf16fdb10825bba9b/d77af57d07af59f38009762436625e7e6b80a645dfe3f16cf16fdb10825bba9b-json.log",
	        "Name": "/scheduled-stop-090251",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "scheduled-stop-090251:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "scheduled-stop-090251",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d77af57d07af59f38009762436625e7e6b80a645dfe3f16cf16fdb10825bba9b",
	                "LowerDir": "/var/lib/docker/overlay2/ef1b0576c9b0880eb09ee1261c411deb854905f94f35e9cad2a947aca72b561b-init/diff:/var/lib/docker/overlay2/570c170e295ff2789664398ebc60cb792c7d3e094959c6d22ed3c06d39e2eff9/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ef1b0576c9b0880eb09ee1261c411deb854905f94f35e9cad2a947aca72b561b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ef1b0576c9b0880eb09ee1261c411deb854905f94f35e9cad2a947aca72b561b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ef1b0576c9b0880eb09ee1261c411deb854905f94f35e9cad2a947aca72b561b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "scheduled-stop-090251",
	                "Source": "/var/lib/docker/volumes/scheduled-stop-090251/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "scheduled-stop-090251",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "scheduled-stop-090251",
	                "name.minikube.sigs.k8s.io": "scheduled-stop-090251",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "21f146426c2356bf34581a5e605fea55a9416ad29b36268289c18e5e293b5ca0",
	            "SandboxKey": "/var/run/docker/netns/21f146426c23",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32970"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32971"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32974"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32972"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32973"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "scheduled-stop-090251": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "d2:a5:4c:4b:d3:38",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "e8719ec6d49fac0e23164632cc37196a6ace8f604b259e066a87587e861f23d7",
	                    "EndpointID": "bc8b35bbadcc81bfac8739383710d1e6e92c66b130e53cd3dffe16ecbea2ec79",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "scheduled-stop-090251",
	                        "d77af57d07af"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-090251 -n scheduled-stop-090251
helpers_test.go:252: <<< TestScheduledStopUnix FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestScheduledStopUnix]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p scheduled-stop-090251 logs -n 25
helpers_test.go:260: TestScheduledStopUnix logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                            ARGS                                                                             │        PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ stop    │ -p multinode-025632                                                                                                                                         │ multinode-025632      │ jenkins │ v1.36.0 │ 08 Sep 25 13:59 UTC │ 08 Sep 25 14:00 UTC │
	│ start   │ -p multinode-025632 --wait=true -v=5 --alsologtostderr                                                                                                      │ multinode-025632      │ jenkins │ v1.36.0 │ 08 Sep 25 14:00 UTC │ 08 Sep 25 14:01 UTC │
	│ node    │ list -p multinode-025632                                                                                                                                    │ multinode-025632      │ jenkins │ v1.36.0 │ 08 Sep 25 14:01 UTC │                     │
	│ node    │ multinode-025632 node delete m03                                                                                                                            │ multinode-025632      │ jenkins │ v1.36.0 │ 08 Sep 25 14:01 UTC │ 08 Sep 25 14:01 UTC │
	│ stop    │ multinode-025632 stop                                                                                                                                       │ multinode-025632      │ jenkins │ v1.36.0 │ 08 Sep 25 14:01 UTC │ 08 Sep 25 14:01 UTC │
	│ start   │ -p multinode-025632 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=docker                                                          │ multinode-025632      │ jenkins │ v1.36.0 │ 08 Sep 25 14:01 UTC │ 08 Sep 25 14:02 UTC │
	│ node    │ list -p multinode-025632                                                                                                                                    │ multinode-025632      │ jenkins │ v1.36.0 │ 08 Sep 25 14:02 UTC │                     │
	│ start   │ -p multinode-025632-m02 --driver=docker  --container-runtime=docker                                                                                         │ multinode-025632-m02  │ jenkins │ v1.36.0 │ 08 Sep 25 14:02 UTC │                     │
	│ start   │ -p multinode-025632-m03 --driver=docker  --container-runtime=docker                                                                                         │ multinode-025632-m03  │ jenkins │ v1.36.0 │ 08 Sep 25 14:02 UTC │ 08 Sep 25 14:02 UTC │
	│ node    │ add -p multinode-025632                                                                                                                                     │ multinode-025632      │ jenkins │ v1.36.0 │ 08 Sep 25 14:02 UTC │                     │
	│ delete  │ -p multinode-025632-m03                                                                                                                                     │ multinode-025632-m03  │ jenkins │ v1.36.0 │ 08 Sep 25 14:02 UTC │ 08 Sep 25 14:02 UTC │
	│ delete  │ -p multinode-025632                                                                                                                                         │ multinode-025632      │ jenkins │ v1.36.0 │ 08 Sep 25 14:02 UTC │ 08 Sep 25 14:03 UTC │
	│ start   │ -p test-preload-350973 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.32.0 │ test-preload-350973   │ jenkins │ v1.36.0 │ 08 Sep 25 14:03 UTC │ 08 Sep 25 14:03 UTC │
	│ image   │ test-preload-350973 image pull gcr.io/k8s-minikube/busybox                                                                                                  │ test-preload-350973   │ jenkins │ v1.36.0 │ 08 Sep 25 14:03 UTC │ 08 Sep 25 14:03 UTC │
	│ stop    │ -p test-preload-350973                                                                                                                                      │ test-preload-350973   │ jenkins │ v1.36.0 │ 08 Sep 25 14:03 UTC │ 08 Sep 25 14:04 UTC │
	│ start   │ -p test-preload-350973 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker                                         │ test-preload-350973   │ jenkins │ v1.36.0 │ 08 Sep 25 14:04 UTC │ 08 Sep 25 14:05 UTC │
	│ image   │ test-preload-350973 image list                                                                                                                              │ test-preload-350973   │ jenkins │ v1.36.0 │ 08 Sep 25 14:05 UTC │ 08 Sep 25 14:05 UTC │
	│ delete  │ -p test-preload-350973                                                                                                                                      │ test-preload-350973   │ jenkins │ v1.36.0 │ 08 Sep 25 14:05 UTC │ 08 Sep 25 14:05 UTC │
	│ start   │ -p scheduled-stop-090251 --memory=3072 --driver=docker  --container-runtime=docker                                                                          │ scheduled-stop-090251 │ jenkins │ v1.36.0 │ 08 Sep 25 14:05 UTC │ 08 Sep 25 14:05 UTC │
	│ stop    │ -p scheduled-stop-090251 --schedule 5m                                                                                                                      │ scheduled-stop-090251 │ jenkins │ v1.36.0 │ 08 Sep 25 14:05 UTC │                     │
	│ stop    │ -p scheduled-stop-090251 --schedule 5m                                                                                                                      │ scheduled-stop-090251 │ jenkins │ v1.36.0 │ 08 Sep 25 14:05 UTC │                     │
	│ stop    │ -p scheduled-stop-090251 --schedule 5m                                                                                                                      │ scheduled-stop-090251 │ jenkins │ v1.36.0 │ 08 Sep 25 14:05 UTC │                     │
	│ stop    │ -p scheduled-stop-090251 --schedule 15s                                                                                                                     │ scheduled-stop-090251 │ jenkins │ v1.36.0 │ 08 Sep 25 14:05 UTC │                     │
	│ stop    │ -p scheduled-stop-090251 --schedule 15s                                                                                                                     │ scheduled-stop-090251 │ jenkins │ v1.36.0 │ 08 Sep 25 14:05 UTC │                     │
	│ stop    │ -p scheduled-stop-090251 --schedule 15s                                                                                                                     │ scheduled-stop-090251 │ jenkins │ v1.36.0 │ 08 Sep 25 14:05 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/08 14:05:05
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0908 14:05:05.768051  206209 out.go:360] Setting OutFile to fd 1 ...
	I0908 14:05:05.768165  206209 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 14:05:05.768169  206209 out.go:374] Setting ErrFile to fd 2...
	I0908 14:05:05.768173  206209 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 14:05:05.768445  206209 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21504-2320/.minikube/bin
	I0908 14:05:05.769086  206209 out.go:368] Setting JSON to false
	I0908 14:05:05.769919  206209 start.go:130] hostinfo: {"hostname":"ip-172-31-29-130","uptime":2854,"bootTime":1757337452,"procs":154,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0908 14:05:05.769974  206209 start.go:140] virtualization:  
	I0908 14:05:05.773772  206209 out.go:179] * [scheduled-stop-090251] minikube v1.36.0 on Ubuntu 20.04 (arm64)
	I0908 14:05:05.778220  206209 out.go:179]   - MINIKUBE_LOCATION=21504
	I0908 14:05:05.778390  206209 notify.go:220] Checking for updates...
	I0908 14:05:05.784855  206209 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0908 14:05:05.788081  206209 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21504-2320/kubeconfig
	I0908 14:05:05.791182  206209 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21504-2320/.minikube
	I0908 14:05:05.794325  206209 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0908 14:05:05.797428  206209 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0908 14:05:05.800621  206209 driver.go:421] Setting default libvirt URI to qemu:///system
	I0908 14:05:05.834503  206209 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0908 14:05:05.834643  206209 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0908 14:05:05.893353  206209 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2025-09-08 14:05:05.883708602 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0908 14:05:05.893480  206209 docker.go:318] overlay module found
	I0908 14:05:05.896803  206209 out.go:179] * Using the docker driver based on user configuration
	I0908 14:05:05.899742  206209 start.go:304] selected driver: docker
	I0908 14:05:05.899760  206209 start.go:918] validating driver "docker" against <nil>
	I0908 14:05:05.899773  206209 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0908 14:05:05.900506  206209 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0908 14:05:05.955090  206209 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2025-09-08 14:05:05.944858468 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0908 14:05:05.955230  206209 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0908 14:05:05.955457  206209 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I0908 14:05:05.958448  206209 out.go:179] * Using Docker driver with root privileges
	I0908 14:05:05.961431  206209 cni.go:84] Creating CNI manager for ""
	I0908 14:05:05.961491  206209 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0908 14:05:05.961498  206209 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0908 14:05:05.961584  206209 start.go:348] cluster config:
	{Name:scheduled-stop-090251 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:scheduled-stop-090251 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 14:05:05.966689  206209 out.go:179] * Starting "scheduled-stop-090251" primary control-plane node in "scheduled-stop-090251" cluster
	I0908 14:05:05.969556  206209 cache.go:123] Beginning downloading kic base image for docker with docker
	I0908 14:05:05.972609  206209 out.go:179] * Pulling base image v0.0.47-1756980985-21488 ...
	I0908 14:05:05.975569  206209 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0908 14:05:05.975618  206209 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21504-2320/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-arm64.tar.lz4
	I0908 14:05:05.975626  206209 cache.go:58] Caching tarball of preloaded images
	I0908 14:05:05.975671  206209 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 in local docker daemon
	I0908 14:05:05.975727  206209 preload.go:172] Found /home/jenkins/minikube-integration/21504-2320/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0908 14:05:05.975736  206209 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on docker
	I0908 14:05:05.976071  206209 profile.go:143] Saving config to /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/scheduled-stop-090251/config.json ...
	I0908 14:05:05.976088  206209 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/scheduled-stop-090251/config.json: {Name:mkdf4bb1a5f04d2861b05fbc9b986be2c4d77f1b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 14:05:05.995681  206209 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 in local docker daemon, skipping pull
	I0908 14:05:05.995695  206209 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 exists in daemon, skipping load
	I0908 14:05:05.995707  206209 cache.go:232] Successfully downloaded all kic artifacts
	I0908 14:05:05.995740  206209 start.go:360] acquireMachinesLock for scheduled-stop-090251: {Name:mkfd5cdcc53046b1b45b6a24241c42f463801d72 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0908 14:05:05.995839  206209 start.go:364] duration metric: took 85.513µs to acquireMachinesLock for "scheduled-stop-090251"
	I0908 14:05:05.995862  206209 start.go:93] Provisioning new machine with config: &{Name:scheduled-stop-090251 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:scheduled-stop-090251 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0908 14:05:05.995930  206209 start.go:125] createHost starting for "" (driver="docker")
	I0908 14:05:05.999495  206209 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0908 14:05:05.999739  206209 start.go:159] libmachine.API.Create for "scheduled-stop-090251" (driver="docker")
	I0908 14:05:05.999773  206209 client.go:168] LocalClient.Create starting
	I0908 14:05:05.999841  206209 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21504-2320/.minikube/certs/ca.pem
	I0908 14:05:05.999874  206209 main.go:141] libmachine: Decoding PEM data...
	I0908 14:05:05.999889  206209 main.go:141] libmachine: Parsing certificate...
	I0908 14:05:05.999939  206209 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21504-2320/.minikube/certs/cert.pem
	I0908 14:05:05.999963  206209 main.go:141] libmachine: Decoding PEM data...
	I0908 14:05:05.999972  206209 main.go:141] libmachine: Parsing certificate...
	I0908 14:05:06.000344  206209 cli_runner.go:164] Run: docker network inspect scheduled-stop-090251 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0908 14:05:06.018493  206209 cli_runner.go:211] docker network inspect scheduled-stop-090251 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0908 14:05:06.018563  206209 network_create.go:284] running [docker network inspect scheduled-stop-090251] to gather additional debugging logs...
	I0908 14:05:06.018578  206209 cli_runner.go:164] Run: docker network inspect scheduled-stop-090251
	W0908 14:05:06.049052  206209 cli_runner.go:211] docker network inspect scheduled-stop-090251 returned with exit code 1
	I0908 14:05:06.049074  206209 network_create.go:287] error running [docker network inspect scheduled-stop-090251]: docker network inspect scheduled-stop-090251: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network scheduled-stop-090251 not found
	I0908 14:05:06.049085  206209 network_create.go:289] output of [docker network inspect scheduled-stop-090251]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network scheduled-stop-090251 not found
	
	** /stderr **
	I0908 14:05:06.049219  206209 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0908 14:05:06.067243  206209 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-128e80606eed IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:ea:f3:36:ea:cc:a6} reservation:<nil>}
	I0908 14:05:06.067485  206209 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-43116294d4d2 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:ee:2d:cc:c9:e9:fc} reservation:<nil>}
	I0908 14:05:06.067742  206209 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-2707d6e37252 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:9e:af:7b:c1:17:e8} reservation:<nil>}
	I0908 14:05:06.068064  206209 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a2e290}
	I0908 14:05:06.068083  206209 network_create.go:124] attempt to create docker network scheduled-stop-090251 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0908 14:05:06.068137  206209 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=scheduled-stop-090251 scheduled-stop-090251
	I0908 14:05:06.129590  206209 network_create.go:108] docker network scheduled-stop-090251 192.168.76.0/24 created
	I0908 14:05:06.129612  206209 kic.go:121] calculated static IP "192.168.76.2" for the "scheduled-stop-090251" container
	I0908 14:05:06.129695  206209 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0908 14:05:06.145985  206209 cli_runner.go:164] Run: docker volume create scheduled-stop-090251 --label name.minikube.sigs.k8s.io=scheduled-stop-090251 --label created_by.minikube.sigs.k8s.io=true
	I0908 14:05:06.164543  206209 oci.go:103] Successfully created a docker volume scheduled-stop-090251
	I0908 14:05:06.164633  206209 cli_runner.go:164] Run: docker run --rm --name scheduled-stop-090251-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=scheduled-stop-090251 --entrypoint /usr/bin/test -v scheduled-stop-090251:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 -d /var/lib
	I0908 14:05:06.722344  206209 oci.go:107] Successfully prepared a docker volume scheduled-stop-090251
	I0908 14:05:06.722380  206209 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0908 14:05:06.722411  206209 kic.go:194] Starting extracting preloaded images to volume ...
	I0908 14:05:06.722475  206209 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21504-2320/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v scheduled-stop-090251:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 -I lz4 -xf /preloaded.tar -C /extractDir
	I0908 14:05:10.609158  206209 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21504-2320/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v scheduled-stop-090251:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 -I lz4 -xf /preloaded.tar -C /extractDir: (3.886648123s)
	I0908 14:05:10.609179  206209 kic.go:203] duration metric: took 3.886765471s to extract preloaded images to volume ...
	W0908 14:05:10.609310  206209 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0908 14:05:10.609423  206209 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0908 14:05:10.661425  206209 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname scheduled-stop-090251 --name scheduled-stop-090251 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=scheduled-stop-090251 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=scheduled-stop-090251 --network scheduled-stop-090251 --ip 192.168.76.2 --volume scheduled-stop-090251:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79
	I0908 14:05:10.956989  206209 cli_runner.go:164] Run: docker container inspect scheduled-stop-090251 --format={{.State.Running}}
	I0908 14:05:10.990546  206209 cli_runner.go:164] Run: docker container inspect scheduled-stop-090251 --format={{.State.Status}}
	I0908 14:05:11.015384  206209 cli_runner.go:164] Run: docker exec scheduled-stop-090251 stat /var/lib/dpkg/alternatives/iptables
	I0908 14:05:11.078453  206209 oci.go:144] the created container "scheduled-stop-090251" has a running status.
	I0908 14:05:11.078494  206209 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21504-2320/.minikube/machines/scheduled-stop-090251/id_rsa...
	I0908 14:05:11.403745  206209 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21504-2320/.minikube/machines/scheduled-stop-090251/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0908 14:05:11.425633  206209 cli_runner.go:164] Run: docker container inspect scheduled-stop-090251 --format={{.State.Status}}
	I0908 14:05:11.452020  206209 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0908 14:05:11.452031  206209 kic_runner.go:114] Args: [docker exec --privileged scheduled-stop-090251 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0908 14:05:11.521668  206209 cli_runner.go:164] Run: docker container inspect scheduled-stop-090251 --format={{.State.Status}}
	I0908 14:05:11.548602  206209 machine.go:93] provisionDockerMachine start ...
	I0908 14:05:11.548694  206209 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-090251
	I0908 14:05:11.573588  206209 main.go:141] libmachine: Using SSH client type: native
	I0908 14:05:11.573926  206209 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef840] 0x3f2000 <nil>  [] 0s} 127.0.0.1 32970 <nil> <nil>}
	I0908 14:05:11.573934  206209 main.go:141] libmachine: About to run SSH command:
	hostname
	I0908 14:05:11.746107  206209 main.go:141] libmachine: SSH cmd err, output: <nil>: scheduled-stop-090251
	
	I0908 14:05:11.746121  206209 ubuntu.go:182] provisioning hostname "scheduled-stop-090251"
	I0908 14:05:11.746190  206209 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-090251
	I0908 14:05:11.767197  206209 main.go:141] libmachine: Using SSH client type: native
	I0908 14:05:11.767490  206209 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef840] 0x3f2000 <nil>  [] 0s} 127.0.0.1 32970 <nil> <nil>}
	I0908 14:05:11.767499  206209 main.go:141] libmachine: About to run SSH command:
	sudo hostname scheduled-stop-090251 && echo "scheduled-stop-090251" | sudo tee /etc/hostname
	I0908 14:05:11.908412  206209 main.go:141] libmachine: SSH cmd err, output: <nil>: scheduled-stop-090251
	
	I0908 14:05:11.908498  206209 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-090251
	I0908 14:05:11.931526  206209 main.go:141] libmachine: Using SSH client type: native
	I0908 14:05:11.931835  206209 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef840] 0x3f2000 <nil>  [] 0s} 127.0.0.1 32970 <nil> <nil>}
	I0908 14:05:11.931850  206209 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sscheduled-stop-090251' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 scheduled-stop-090251/g' /etc/hosts;
				else 
					echo '127.0.1.1 scheduled-stop-090251' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0908 14:05:12.059108  206209 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0908 14:05:12.059124  206209 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21504-2320/.minikube CaCertPath:/home/jenkins/minikube-integration/21504-2320/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21504-2320/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21504-2320/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21504-2320/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21504-2320/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21504-2320/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21504-2320/.minikube}
	I0908 14:05:12.059145  206209 ubuntu.go:190] setting up certificates
	I0908 14:05:12.059154  206209 provision.go:84] configureAuth start
	I0908 14:05:12.059213  206209 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" scheduled-stop-090251
	I0908 14:05:12.083246  206209 provision.go:143] copyHostCerts
	I0908 14:05:12.083307  206209 exec_runner.go:144] found /home/jenkins/minikube-integration/21504-2320/.minikube/ca.pem, removing ...
	I0908 14:05:12.083315  206209 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21504-2320/.minikube/ca.pem
	I0908 14:05:12.083396  206209 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21504-2320/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21504-2320/.minikube/ca.pem (1082 bytes)
	I0908 14:05:12.083490  206209 exec_runner.go:144] found /home/jenkins/minikube-integration/21504-2320/.minikube/cert.pem, removing ...
	I0908 14:05:12.083494  206209 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21504-2320/.minikube/cert.pem
	I0908 14:05:12.083518  206209 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21504-2320/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21504-2320/.minikube/cert.pem (1123 bytes)
	I0908 14:05:12.083573  206209 exec_runner.go:144] found /home/jenkins/minikube-integration/21504-2320/.minikube/key.pem, removing ...
	I0908 14:05:12.083576  206209 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21504-2320/.minikube/key.pem
	I0908 14:05:12.083597  206209 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21504-2320/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21504-2320/.minikube/key.pem (1675 bytes)
	I0908 14:05:12.083648  206209 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21504-2320/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21504-2320/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21504-2320/.minikube/certs/ca-key.pem org=jenkins.scheduled-stop-090251 san=[127.0.0.1 192.168.76.2 localhost minikube scheduled-stop-090251]
	I0908 14:05:13.067924  206209 provision.go:177] copyRemoteCerts
	I0908 14:05:13.067982  206209 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0908 14:05:13.068025  206209 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-090251
	I0908 14:05:13.086081  206209 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32970 SSHKeyPath:/home/jenkins/minikube-integration/21504-2320/.minikube/machines/scheduled-stop-090251/id_rsa Username:docker}
	I0908 14:05:13.179659  206209 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21504-2320/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0908 14:05:13.205899  206209 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21504-2320/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0908 14:05:13.231403  206209 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21504-2320/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0908 14:05:13.255847  206209 provision.go:87] duration metric: took 1.196669961s to configureAuth
	I0908 14:05:13.255864  206209 ubuntu.go:206] setting minikube options for container-runtime
	I0908 14:05:13.256061  206209 config.go:182] Loaded profile config "scheduled-stop-090251": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0908 14:05:13.256114  206209 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-090251
	I0908 14:05:13.273295  206209 main.go:141] libmachine: Using SSH client type: native
	I0908 14:05:13.273602  206209 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef840] 0x3f2000 <nil>  [] 0s} 127.0.0.1 32970 <nil> <nil>}
	I0908 14:05:13.273609  206209 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0908 14:05:13.399052  206209 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0908 14:05:13.399064  206209 ubuntu.go:71] root file system type: overlay
	I0908 14:05:13.399167  206209 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0908 14:05:13.399230  206209 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-090251
	I0908 14:05:13.417135  206209 main.go:141] libmachine: Using SSH client type: native
	I0908 14:05:13.417420  206209 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef840] 0x3f2000 <nil>  [] 0s} 127.0.0.1 32970 <nil> <nil>}
	I0908 14:05:13.417500  206209 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0908 14:05:13.559210  206209 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I0908 14:05:13.559286  206209 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-090251
	I0908 14:05:13.577953  206209 main.go:141] libmachine: Using SSH client type: native
	I0908 14:05:13.578246  206209 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef840] 0x3f2000 <nil>  [] 0s} 127.0.0.1 32970 <nil> <nil>}
	I0908 14:05:13.578261  206209 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0908 14:05:14.395129  206209 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-09-03 20:57:01.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-09-08 14:05:13.554681707 +0000
	@@ -9,23 +9,34 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0908 14:05:14.395155  206209 machine.go:96] duration metric: took 2.846540901s to provisionDockerMachine
	I0908 14:05:14.395165  206209 client.go:171] duration metric: took 8.395386436s to LocalClient.Create
	I0908 14:05:14.395183  206209 start.go:167] duration metric: took 8.395445224s to libmachine.API.Create "scheduled-stop-090251"
	I0908 14:05:14.395189  206209 start.go:293] postStartSetup for "scheduled-stop-090251" (driver="docker")
	I0908 14:05:14.395199  206209 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0908 14:05:14.395258  206209 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0908 14:05:14.395304  206209 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-090251
	I0908 14:05:14.412412  206209 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32970 SSHKeyPath:/home/jenkins/minikube-integration/21504-2320/.minikube/machines/scheduled-stop-090251/id_rsa Username:docker}
	I0908 14:05:14.503919  206209 ssh_runner.go:195] Run: cat /etc/os-release
	I0908 14:05:14.507044  206209 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0908 14:05:14.507067  206209 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0908 14:05:14.507076  206209 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0908 14:05:14.507082  206209 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0908 14:05:14.507091  206209 filesync.go:126] Scanning /home/jenkins/minikube-integration/21504-2320/.minikube/addons for local assets ...
	I0908 14:05:14.507147  206209 filesync.go:126] Scanning /home/jenkins/minikube-integration/21504-2320/.minikube/files for local assets ...
	I0908 14:05:14.507228  206209 filesync.go:149] local asset: /home/jenkins/minikube-integration/21504-2320/.minikube/files/etc/ssl/certs/41202.pem -> 41202.pem in /etc/ssl/certs
	I0908 14:05:14.507328  206209 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0908 14:05:14.515581  206209 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21504-2320/.minikube/files/etc/ssl/certs/41202.pem --> /etc/ssl/certs/41202.pem (1708 bytes)
	I0908 14:05:14.539977  206209 start.go:296] duration metric: took 144.77485ms for postStartSetup
	I0908 14:05:14.540351  206209 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" scheduled-stop-090251
	I0908 14:05:14.557241  206209 profile.go:143] Saving config to /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/scheduled-stop-090251/config.json ...
	I0908 14:05:14.557501  206209 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0908 14:05:14.557545  206209 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-090251
	I0908 14:05:14.574147  206209 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32970 SSHKeyPath:/home/jenkins/minikube-integration/21504-2320/.minikube/machines/scheduled-stop-090251/id_rsa Username:docker}
	I0908 14:05:14.659547  206209 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0908 14:05:14.664115  206209 start.go:128] duration metric: took 8.668172627s to createHost
	I0908 14:05:14.664129  206209 start.go:83] releasing machines lock for "scheduled-stop-090251", held for 8.668283139s
	I0908 14:05:14.664205  206209 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" scheduled-stop-090251
	I0908 14:05:14.681453  206209 ssh_runner.go:195] Run: cat /version.json
	I0908 14:05:14.681489  206209 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0908 14:05:14.681494  206209 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-090251
	I0908 14:05:14.681557  206209 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-090251
	I0908 14:05:14.704835  206209 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32970 SSHKeyPath:/home/jenkins/minikube-integration/21504-2320/.minikube/machines/scheduled-stop-090251/id_rsa Username:docker}
	I0908 14:05:14.717026  206209 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32970 SSHKeyPath:/home/jenkins/minikube-integration/21504-2320/.minikube/machines/scheduled-stop-090251/id_rsa Username:docker}
	I0908 14:05:14.790897  206209 ssh_runner.go:195] Run: systemctl --version
	I0908 14:05:14.918254  206209 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0908 14:05:14.922545  206209 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0908 14:05:14.948297  206209 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0908 14:05:14.948372  206209 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0908 14:05:14.979671  206209 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0908 14:05:14.979688  206209 start.go:495] detecting cgroup driver to use...
	I0908 14:05:14.979719  206209 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0908 14:05:14.979813  206209 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0908 14:05:14.996027  206209 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0908 14:05:15.006455  206209 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0908 14:05:15.016646  206209 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0908 14:05:15.016718  206209 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0908 14:05:15.040279  206209 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0908 14:05:15.060612  206209 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0908 14:05:15.081178  206209 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0908 14:05:15.092218  206209 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0908 14:05:15.102254  206209 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0908 14:05:15.113459  206209 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0908 14:05:15.124146  206209 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0908 14:05:15.135690  206209 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0908 14:05:15.145002  206209 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0908 14:05:15.154050  206209 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 14:05:15.247148  206209 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0908 14:05:15.336485  206209 start.go:495] detecting cgroup driver to use...
	I0908 14:05:15.336522  206209 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0908 14:05:15.336580  206209 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0908 14:05:15.350894  206209 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0908 14:05:15.363509  206209 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0908 14:05:15.397074  206209 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0908 14:05:15.408777  206209 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0908 14:05:15.421398  206209 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0908 14:05:15.438304  206209 ssh_runner.go:195] Run: which cri-dockerd
	I0908 14:05:15.441806  206209 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0908 14:05:15.450662  206209 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I0908 14:05:15.469029  206209 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0908 14:05:15.564603  206209 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0908 14:05:15.652162  206209 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I0908 14:05:15.652245  206209 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0908 14:05:15.670723  206209 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0908 14:05:15.682684  206209 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 14:05:15.775268  206209 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0908 14:05:16.161576  206209 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0908 14:05:16.173839  206209 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0908 14:05:16.186716  206209 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0908 14:05:16.199097  206209 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0908 14:05:16.294646  206209 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0908 14:05:16.391593  206209 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 14:05:16.483355  206209 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0908 14:05:16.497807  206209 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I0908 14:05:16.509621  206209 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 14:05:16.602517  206209 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0908 14:05:16.677337  206209 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0908 14:05:16.691148  206209 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0908 14:05:16.691213  206209 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0908 14:05:16.695206  206209 start.go:563] Will wait 60s for crictl version
	I0908 14:05:16.695260  206209 ssh_runner.go:195] Run: which crictl
	I0908 14:05:16.698861  206209 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0908 14:05:16.747539  206209 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.4.0
	RuntimeApiVersion:  v1
	I0908 14:05:16.747595  206209 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0908 14:05:16.769651  206209 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0908 14:05:16.795492  206209 out.go:252] * Preparing Kubernetes v1.34.0 on Docker 28.4.0 ...
	I0908 14:05:16.795576  206209 cli_runner.go:164] Run: docker network inspect scheduled-stop-090251 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0908 14:05:16.811506  206209 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0908 14:05:16.815195  206209 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0908 14:05:16.826066  206209 kubeadm.go:875] updating cluster {Name:scheduled-stop-090251 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:scheduled-stop-090251 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0908 14:05:16.826166  206209 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0908 14:05:16.826218  206209 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0908 14:05:16.844557  206209 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.0
	registry.k8s.io/kube-controller-manager:v1.34.0
	registry.k8s.io/kube-scheduler:v1.34.0
	registry.k8s.io/kube-proxy:v1.34.0
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0908 14:05:16.844571  206209 docker.go:621] Images already preloaded, skipping extraction
	I0908 14:05:16.844639  206209 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0908 14:05:16.861616  206209 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.0
	registry.k8s.io/kube-controller-manager:v1.34.0
	registry.k8s.io/kube-scheduler:v1.34.0
	registry.k8s.io/kube-proxy:v1.34.0
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0908 14:05:16.861629  206209 cache_images.go:85] Images are preloaded, skipping loading
	I0908 14:05:16.861640  206209 kubeadm.go:926] updating node { 192.168.76.2 8443 v1.34.0 docker true true} ...
	I0908 14:05:16.861729  206209 kubeadm.go:938] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=scheduled-stop-090251 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:scheduled-stop-090251 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0908 14:05:16.861796  206209 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0908 14:05:16.910230  206209 cni.go:84] Creating CNI manager for ""
	I0908 14:05:16.910253  206209 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0908 14:05:16.910264  206209 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0908 14:05:16.910284  206209 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:scheduled-stop-090251 NodeName:scheduled-stop-090251 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0908 14:05:16.910424  206209 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "scheduled-stop-090251"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0908 14:05:16.910492  206209 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0908 14:05:16.919577  206209 binaries.go:44] Found k8s binaries, skipping transfer
	I0908 14:05:16.919637  206209 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0908 14:05:16.928447  206209 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (320 bytes)
	I0908 14:05:16.947245  206209 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0908 14:05:16.965590  206209 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2222 bytes)
	I0908 14:05:16.984781  206209 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0908 14:05:16.988512  206209 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0908 14:05:16.999993  206209 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 14:05:17.096017  206209 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0908 14:05:17.110290  206209 certs.go:68] Setting up /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/scheduled-stop-090251 for IP: 192.168.76.2
	I0908 14:05:17.110311  206209 certs.go:194] generating shared ca certs ...
	I0908 14:05:17.110326  206209 certs.go:226] acquiring lock for ca certs: {Name:mk0021cd008d807b29f57862e5444612344fe341 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 14:05:17.110508  206209 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21504-2320/.minikube/ca.key
	I0908 14:05:17.110556  206209 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21504-2320/.minikube/proxy-client-ca.key
	I0908 14:05:17.110562  206209 certs.go:256] generating profile certs ...
	I0908 14:05:17.110625  206209 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/scheduled-stop-090251/client.key
	I0908 14:05:17.110636  206209 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/scheduled-stop-090251/client.crt with IP's: []
	I0908 14:05:17.720157  206209 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/scheduled-stop-090251/client.crt ...
	I0908 14:05:17.720174  206209 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/scheduled-stop-090251/client.crt: {Name:mke2435ce5f36c7a5f690b94048ce62a8a6a9d53 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 14:05:17.720391  206209 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/scheduled-stop-090251/client.key ...
	I0908 14:05:17.720400  206209 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/scheduled-stop-090251/client.key: {Name:mkba7b9cc3132cfd3727ac0cb363fb13635fd333 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 14:05:17.720493  206209 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/scheduled-stop-090251/apiserver.key.e19f5597
	I0908 14:05:17.720506  206209 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/scheduled-stop-090251/apiserver.crt.e19f5597 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I0908 14:05:18.382673  206209 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/scheduled-stop-090251/apiserver.crt.e19f5597 ...
	I0908 14:05:18.382690  206209 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/scheduled-stop-090251/apiserver.crt.e19f5597: {Name:mk9e10f10b87349f96e758cb03a902117d82e002 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 14:05:18.382901  206209 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/scheduled-stop-090251/apiserver.key.e19f5597 ...
	I0908 14:05:18.382910  206209 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/scheduled-stop-090251/apiserver.key.e19f5597: {Name:mke1ea6745c6c601fb2c2203d4a511fcbf61234f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 14:05:18.383007  206209 certs.go:381] copying /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/scheduled-stop-090251/apiserver.crt.e19f5597 -> /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/scheduled-stop-090251/apiserver.crt
	I0908 14:05:18.383083  206209 certs.go:385] copying /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/scheduled-stop-090251/apiserver.key.e19f5597 -> /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/scheduled-stop-090251/apiserver.key
	I0908 14:05:18.383139  206209 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/scheduled-stop-090251/proxy-client.key
	I0908 14:05:18.383150  206209 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/scheduled-stop-090251/proxy-client.crt with IP's: []
	I0908 14:05:18.985131  206209 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/scheduled-stop-090251/proxy-client.crt ...
	I0908 14:05:18.985146  206209 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/scheduled-stop-090251/proxy-client.crt: {Name:mkd80fa368b5af97ab6f603dcce4362a79ca3103 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 14:05:18.985340  206209 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/scheduled-stop-090251/proxy-client.key ...
	I0908 14:05:18.985347  206209 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/scheduled-stop-090251/proxy-client.key: {Name:mkeddac85ddc295406d8cbeaf8c4880a9c78b6b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 14:05:18.985524  206209 certs.go:484] found cert: /home/jenkins/minikube-integration/21504-2320/.minikube/certs/4120.pem (1338 bytes)
	W0908 14:05:18.985558  206209 certs.go:480] ignoring /home/jenkins/minikube-integration/21504-2320/.minikube/certs/4120_empty.pem, impossibly tiny 0 bytes
	I0908 14:05:18.985565  206209 certs.go:484] found cert: /home/jenkins/minikube-integration/21504-2320/.minikube/certs/ca-key.pem (1679 bytes)
	I0908 14:05:18.985588  206209 certs.go:484] found cert: /home/jenkins/minikube-integration/21504-2320/.minikube/certs/ca.pem (1082 bytes)
	I0908 14:05:18.985612  206209 certs.go:484] found cert: /home/jenkins/minikube-integration/21504-2320/.minikube/certs/cert.pem (1123 bytes)
	I0908 14:05:18.985632  206209 certs.go:484] found cert: /home/jenkins/minikube-integration/21504-2320/.minikube/certs/key.pem (1675 bytes)
	I0908 14:05:18.985674  206209 certs.go:484] found cert: /home/jenkins/minikube-integration/21504-2320/.minikube/files/etc/ssl/certs/41202.pem (1708 bytes)
	I0908 14:05:18.986264  206209 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21504-2320/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0908 14:05:19.013665  206209 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21504-2320/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0908 14:05:19.041287  206209 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21504-2320/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0908 14:05:19.066701  206209 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21504-2320/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0908 14:05:19.091989  206209 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/scheduled-stop-090251/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0908 14:05:19.117317  206209 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/scheduled-stop-090251/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0908 14:05:19.143786  206209 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/scheduled-stop-090251/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0908 14:05:19.169406  206209 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/scheduled-stop-090251/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0908 14:05:19.194549  206209 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21504-2320/.minikube/files/etc/ssl/certs/41202.pem --> /usr/share/ca-certificates/41202.pem (1708 bytes)
	I0908 14:05:19.219926  206209 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21504-2320/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0908 14:05:19.244889  206209 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21504-2320/.minikube/certs/4120.pem --> /usr/share/ca-certificates/4120.pem (1338 bytes)
	I0908 14:05:19.270157  206209 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0908 14:05:19.289026  206209 ssh_runner.go:195] Run: openssl version
	I0908 14:05:19.295589  206209 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0908 14:05:19.306029  206209 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0908 14:05:19.310708  206209 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  8 13:28 /usr/share/ca-certificates/minikubeCA.pem
	I0908 14:05:19.310788  206209 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0908 14:05:19.318736  206209 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0908 14:05:19.328657  206209 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4120.pem && ln -fs /usr/share/ca-certificates/4120.pem /etc/ssl/certs/4120.pem"
	I0908 14:05:19.338565  206209 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4120.pem
	I0908 14:05:19.342238  206209 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep  8 13:34 /usr/share/ca-certificates/4120.pem
	I0908 14:05:19.342302  206209 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4120.pem
	I0908 14:05:19.350041  206209 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4120.pem /etc/ssl/certs/51391683.0"
	I0908 14:05:19.359801  206209 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/41202.pem && ln -fs /usr/share/ca-certificates/41202.pem /etc/ssl/certs/41202.pem"
	I0908 14:05:19.369955  206209 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41202.pem
	I0908 14:05:19.378602  206209 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep  8 13:34 /usr/share/ca-certificates/41202.pem
	I0908 14:05:19.378660  206209 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41202.pem
	I0908 14:05:19.386148  206209 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/41202.pem /etc/ssl/certs/3ec20f2e.0"
	I0908 14:05:19.396118  206209 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0908 14:05:19.400383  206209 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0908 14:05:19.400425  206209 kubeadm.go:392] StartCluster: {Name:scheduled-stop-090251 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:scheduled-stop-090251 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 14:05:19.400530  206209 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0908 14:05:19.424834  206209 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0908 14:05:19.439170  206209 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0908 14:05:19.448469  206209 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0908 14:05:19.448526  206209 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0908 14:05:19.457429  206209 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0908 14:05:19.457438  206209 kubeadm.go:157] found existing configuration files:
	
	I0908 14:05:19.457488  206209 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0908 14:05:19.467170  206209 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0908 14:05:19.467225  206209 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0908 14:05:19.476349  206209 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0908 14:05:19.485308  206209 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0908 14:05:19.485365  206209 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0908 14:05:19.494656  206209 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0908 14:05:19.504008  206209 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0908 14:05:19.504080  206209 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0908 14:05:19.512743  206209 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0908 14:05:19.522140  206209 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0908 14:05:19.522198  206209 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0908 14:05:19.531264  206209 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0908 14:05:19.575373  206209 kubeadm.go:310] [init] Using Kubernetes version: v1.34.0
	I0908 14:05:19.575571  206209 kubeadm.go:310] [preflight] Running pre-flight checks
	I0908 14:05:19.596778  206209 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0908 14:05:19.596844  206209 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1084-aws
	I0908 14:05:19.596879  206209 kubeadm.go:310] OS: Linux
	I0908 14:05:19.596924  206209 kubeadm.go:310] CGROUPS_CPU: enabled
	I0908 14:05:19.596972  206209 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0908 14:05:19.597020  206209 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0908 14:05:19.597068  206209 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0908 14:05:19.597116  206209 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0908 14:05:19.597174  206209 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0908 14:05:19.597219  206209 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0908 14:05:19.597268  206209 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0908 14:05:19.597315  206209 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0908 14:05:19.663587  206209 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0908 14:05:19.663691  206209 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0908 14:05:19.663781  206209 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0908 14:05:19.682400  206209 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0908 14:05:19.689334  206209 out.go:252]   - Generating certificates and keys ...
	I0908 14:05:19.689424  206209 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0908 14:05:19.689488  206209 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0908 14:05:20.879092  206209 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0908 14:05:21.153566  206209 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0908 14:05:21.542985  206209 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0908 14:05:22.551967  206209 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0908 14:05:23.331151  206209 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0908 14:05:23.331295  206209 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost scheduled-stop-090251] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0908 14:05:23.832518  206209 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0908 14:05:23.832826  206209 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost scheduled-stop-090251] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0908 14:05:24.084919  206209 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0908 14:05:24.585887  206209 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0908 14:05:24.680438  206209 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0908 14:05:24.680565  206209 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0908 14:05:25.041455  206209 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0908 14:05:25.810223  206209 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0908 14:05:25.981768  206209 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0908 14:05:26.620759  206209 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0908 14:05:27.116685  206209 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0908 14:05:27.117451  206209 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0908 14:05:27.120120  206209 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0908 14:05:27.123576  206209 out.go:252]   - Booting up control plane ...
	I0908 14:05:27.123717  206209 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0908 14:05:27.123814  206209 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0908 14:05:27.123894  206209 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0908 14:05:27.137903  206209 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0908 14:05:27.138006  206209 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0908 14:05:27.145181  206209 kubeadm.go:310] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0908 14:05:27.145456  206209 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0908 14:05:27.145500  206209 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0908 14:05:27.251252  206209 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0908 14:05:27.251366  206209 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0908 14:05:28.251060  206209 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.00169163s
	I0908 14:05:28.254605  206209 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0908 14:05:28.254694  206209 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I0908 14:05:28.254833  206209 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0908 14:05:28.254927  206209 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0908 14:05:32.051395  206209 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 3.795784703s
	I0908 14:05:33.214181  206209 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 4.959554331s
	I0908 14:05:34.757348  206209 kubeadm.go:310] [control-plane-check] kube-apiserver is healthy after 6.502174073s
	I0908 14:05:34.778715  206209 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0908 14:05:34.792996  206209 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0908 14:05:34.809535  206209 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0908 14:05:34.809767  206209 kubeadm.go:310] [mark-control-plane] Marking the node scheduled-stop-090251 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0908 14:05:34.822862  206209 kubeadm.go:310] [bootstrap-token] Using token: usvw21.x5azmvbxk8j4zltx
	I0908 14:05:34.825630  206209 out.go:252]   - Configuring RBAC rules ...
	I0908 14:05:34.825747  206209 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0908 14:05:34.833866  206209 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0908 14:05:34.845044  206209 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0908 14:05:34.853608  206209 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0908 14:05:34.858915  206209 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0908 14:05:34.864703  206209 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0908 14:05:35.170119  206209 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0908 14:05:35.600051  206209 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0908 14:05:36.167194  206209 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0908 14:05:36.168492  206209 kubeadm.go:310] 
	I0908 14:05:36.168561  206209 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0908 14:05:36.168565  206209 kubeadm.go:310] 
	I0908 14:05:36.168642  206209 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0908 14:05:36.168645  206209 kubeadm.go:310] 
	I0908 14:05:36.168670  206209 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0908 14:05:36.168815  206209 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0908 14:05:36.168873  206209 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0908 14:05:36.168876  206209 kubeadm.go:310] 
	I0908 14:05:36.168930  206209 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0908 14:05:36.168933  206209 kubeadm.go:310] 
	I0908 14:05:36.168980  206209 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0908 14:05:36.168984  206209 kubeadm.go:310] 
	I0908 14:05:36.169035  206209 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0908 14:05:36.169110  206209 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0908 14:05:36.169178  206209 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0908 14:05:36.169181  206209 kubeadm.go:310] 
	I0908 14:05:36.169265  206209 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0908 14:05:36.169342  206209 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0908 14:05:36.169345  206209 kubeadm.go:310] 
	I0908 14:05:36.169429  206209 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token usvw21.x5azmvbxk8j4zltx \
	I0908 14:05:36.169533  206209 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:7412d166a96d82f93918d03ec09dbc9aa58e761d95c1602d1d02b657bf9086f8 \
	I0908 14:05:36.169553  206209 kubeadm.go:310] 	--control-plane 
	I0908 14:05:36.169557  206209 kubeadm.go:310] 
	I0908 14:05:36.169642  206209 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0908 14:05:36.169645  206209 kubeadm.go:310] 
	I0908 14:05:36.169727  206209 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token usvw21.x5azmvbxk8j4zltx \
	I0908 14:05:36.169829  206209 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:7412d166a96d82f93918d03ec09dbc9aa58e761d95c1602d1d02b657bf9086f8 
	I0908 14:05:36.174978  206209 kubeadm.go:310] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I0908 14:05:36.175198  206209 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I0908 14:05:36.175302  206209 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0908 14:05:36.175319  206209 cni.go:84] Creating CNI manager for ""
	I0908 14:05:36.175331  206209 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0908 14:05:36.178447  206209 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I0908 14:05:36.181236  206209 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0908 14:05:36.190148  206209 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0908 14:05:36.208718  206209 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0908 14:05:36.208794  206209 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 14:05:36.208827  206209 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes scheduled-stop-090251 minikube.k8s.io/updated_at=2025_09_08T14_05_36_0700 minikube.k8s.io/version=v1.36.0 minikube.k8s.io/commit=4237956cfce90d4ab760d817400bd4c89cad50d6 minikube.k8s.io/name=scheduled-stop-090251 minikube.k8s.io/primary=true
	I0908 14:05:36.225442  206209 ops.go:34] apiserver oom_adj: -16
	I0908 14:05:36.327757  206209 kubeadm.go:1105] duration metric: took 119.034105ms to wait for elevateKubeSystemPrivileges
	I0908 14:05:36.356269  206209 kubeadm.go:394] duration metric: took 16.95583859s to StartCluster
	I0908 14:05:36.356292  206209 settings.go:142] acquiring lock: {Name:mk6466197568b454a152c74d528145484fbc55b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 14:05:36.356358  206209 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21504-2320/kubeconfig
	I0908 14:05:36.357032  206209 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21504-2320/kubeconfig: {Name:mk789dd53c90d53c14c1d24f2d2a926103526048 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 14:05:36.357253  206209 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0908 14:05:36.357269  206209 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0908 14:05:36.357601  206209 config.go:182] Loaded profile config "scheduled-stop-090251": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0908 14:05:36.357640  206209 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0908 14:05:36.357713  206209 addons.go:69] Setting storage-provisioner=true in profile "scheduled-stop-090251"
	I0908 14:05:36.357720  206209 addons.go:69] Setting default-storageclass=true in profile "scheduled-stop-090251"
	I0908 14:05:36.357726  206209 addons.go:238] Setting addon storage-provisioner=true in "scheduled-stop-090251"
	I0908 14:05:36.357734  206209 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "scheduled-stop-090251"
	I0908 14:05:36.357747  206209 host.go:66] Checking if "scheduled-stop-090251" exists ...
	I0908 14:05:36.358097  206209 cli_runner.go:164] Run: docker container inspect scheduled-stop-090251 --format={{.State.Status}}
	I0908 14:05:36.358241  206209 cli_runner.go:164] Run: docker container inspect scheduled-stop-090251 --format={{.State.Status}}
	I0908 14:05:36.363276  206209 out.go:179] * Verifying Kubernetes components...
	I0908 14:05:36.366908  206209 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 14:05:36.398153  206209 addons.go:238] Setting addon default-storageclass=true in "scheduled-stop-090251"
	I0908 14:05:36.398180  206209 host.go:66] Checking if "scheduled-stop-090251" exists ...
	I0908 14:05:36.398623  206209 cli_runner.go:164] Run: docker container inspect scheduled-stop-090251 --format={{.State.Status}}
	I0908 14:05:36.406003  206209 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0908 14:05:36.408850  206209 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0908 14:05:36.408861  206209 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0908 14:05:36.408946  206209 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-090251
	I0908 14:05:36.438645  206209 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0908 14:05:36.438657  206209 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0908 14:05:36.438729  206209 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-090251
	I0908 14:05:36.460014  206209 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32970 SSHKeyPath:/home/jenkins/minikube-integration/21504-2320/.minikube/machines/scheduled-stop-090251/id_rsa Username:docker}
	I0908 14:05:36.472356  206209 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32970 SSHKeyPath:/home/jenkins/minikube-integration/21504-2320/.minikube/machines/scheduled-stop-090251/id_rsa Username:docker}
	I0908 14:05:36.600023  206209 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0908 14:05:36.600114  206209 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0908 14:05:36.772678  206209 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0908 14:05:36.781339  206209 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0908 14:05:37.088633  206209 api_server.go:52] waiting for apiserver process to appear ...
	I0908 14:05:37.088693  206209 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0908 14:05:37.088800  206209 start.go:976] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I0908 14:05:37.350492  206209 api_server.go:72] duration metric: took 993.200014ms to wait for apiserver process to appear ...
	I0908 14:05:37.350503  206209 api_server.go:88] waiting for apiserver healthz status ...
	I0908 14:05:37.350518  206209 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0908 14:05:37.362926  206209 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0908 14:05:37.365682  206209 api_server.go:141] control plane version: v1.34.0
	I0908 14:05:37.365699  206209 api_server.go:131] duration metric: took 15.190209ms to wait for apiserver health ...
	I0908 14:05:37.365706  206209 system_pods.go:43] waiting for kube-system pods to appear ...
	I0908 14:05:37.369754  206209 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I0908 14:05:37.371065  206209 system_pods.go:59] 5 kube-system pods found
	I0908 14:05:37.371088  206209 system_pods.go:61] "etcd-scheduled-stop-090251" [f314ed1c-8434-47bd-97a2-7306a37b907c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0908 14:05:37.371095  206209 system_pods.go:61] "kube-apiserver-scheduled-stop-090251" [636679bd-6a19-492c-9adb-07621a8057d3] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0908 14:05:37.371102  206209 system_pods.go:61] "kube-controller-manager-scheduled-stop-090251" [f745426e-be5c-41f2-932d-70a63d628d1f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0908 14:05:37.371107  206209 system_pods.go:61] "kube-scheduler-scheduled-stop-090251" [488b6e6b-8390-47e6-aeb6-470f0946f73a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0908 14:05:37.371117  206209 system_pods.go:61] "storage-provisioner" [e2fc7477-2fb4-4f90-8ed3-967c5e40ca66] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0908 14:05:37.371122  206209 system_pods.go:74] duration metric: took 5.410927ms to wait for pod list to return data ...
	I0908 14:05:37.371132  206209 kubeadm.go:578] duration metric: took 1.01384421s to wait for: map[apiserver:true system_pods:true]
	I0908 14:05:37.371143  206209 node_conditions.go:102] verifying NodePressure condition ...
	I0908 14:05:37.372643  206209 addons.go:514] duration metric: took 1.014985398s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0908 14:05:37.374030  206209 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0908 14:05:37.374048  206209 node_conditions.go:123] node cpu capacity is 2
	I0908 14:05:37.374063  206209 node_conditions.go:105] duration metric: took 2.916239ms to run NodePressure ...
	I0908 14:05:37.374075  206209 start.go:241] waiting for startup goroutines ...
	I0908 14:05:37.592892  206209 kapi.go:214] "coredns" deployment in "kube-system" namespace and "scheduled-stop-090251" context rescaled to 1 replicas
	I0908 14:05:37.592914  206209 start.go:246] waiting for cluster config update ...
	I0908 14:05:37.592936  206209 start.go:255] writing updated cluster config ...
	I0908 14:05:37.593217  206209 ssh_runner.go:195] Run: rm -f paused
	I0908 14:05:37.653839  206209 start.go:617] kubectl: 1.33.2, cluster: 1.34.0 (minor skew: 1)
	I0908 14:05:37.657156  206209 out.go:179] * Done! kubectl is now configured to use "scheduled-stop-090251" cluster and "default" namespace by default
	
	
	==> Docker <==
	Sep 08 14:05:16 scheduled-stop-090251 dockerd[1185]: time="2025-09-08T14:05:16.124129067Z" level=info msg="Loading containers: done."
	Sep 08 14:05:16 scheduled-stop-090251 dockerd[1185]: time="2025-09-08T14:05:16.134855754Z" level=info msg="Docker daemon" commit=249d679 containerd-snapshotter=false storage-driver=overlay2 version=28.4.0
	Sep 08 14:05:16 scheduled-stop-090251 dockerd[1185]: time="2025-09-08T14:05:16.135062103Z" level=info msg="Initializing buildkit"
	Sep 08 14:05:16 scheduled-stop-090251 dockerd[1185]: time="2025-09-08T14:05:16.150970672Z" level=info msg="Completed buildkit initialization"
	Sep 08 14:05:16 scheduled-stop-090251 dockerd[1185]: time="2025-09-08T14:05:16.158544818Z" level=info msg="Daemon has completed initialization"
	Sep 08 14:05:16 scheduled-stop-090251 dockerd[1185]: time="2025-09-08T14:05:16.158642138Z" level=info msg="API listen on /run/docker.sock"
	Sep 08 14:05:16 scheduled-stop-090251 dockerd[1185]: time="2025-09-08T14:05:16.158816092Z" level=info msg="API listen on /var/run/docker.sock"
	Sep 08 14:05:16 scheduled-stop-090251 dockerd[1185]: time="2025-09-08T14:05:16.158940521Z" level=info msg="API listen on [::]:2376"
	Sep 08 14:05:16 scheduled-stop-090251 systemd[1]: Started Docker Application Container Engine.
	Sep 08 14:05:16 scheduled-stop-090251 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	Sep 08 14:05:16 scheduled-stop-090251 cri-dockerd[1482]: time="2025-09-08T14:05:16Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Sep 08 14:05:16 scheduled-stop-090251 cri-dockerd[1482]: time="2025-09-08T14:05:16Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Sep 08 14:05:16 scheduled-stop-090251 cri-dockerd[1482]: time="2025-09-08T14:05:16Z" level=info msg="Start docker client with request timeout 0s"
	Sep 08 14:05:16 scheduled-stop-090251 cri-dockerd[1482]: time="2025-09-08T14:05:16Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Sep 08 14:05:16 scheduled-stop-090251 cri-dockerd[1482]: time="2025-09-08T14:05:16Z" level=info msg="Loaded network plugin cni"
	Sep 08 14:05:16 scheduled-stop-090251 cri-dockerd[1482]: time="2025-09-08T14:05:16Z" level=info msg="Docker cri networking managed by network plugin cni"
	Sep 08 14:05:16 scheduled-stop-090251 cri-dockerd[1482]: time="2025-09-08T14:05:16Z" level=info msg="Setting cgroupDriver cgroupfs"
	Sep 08 14:05:16 scheduled-stop-090251 cri-dockerd[1482]: time="2025-09-08T14:05:16Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Sep 08 14:05:16 scheduled-stop-090251 cri-dockerd[1482]: time="2025-09-08T14:05:16Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Sep 08 14:05:16 scheduled-stop-090251 cri-dockerd[1482]: time="2025-09-08T14:05:16Z" level=info msg="Start cri-dockerd grpc backend"
	Sep 08 14:05:16 scheduled-stop-090251 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	Sep 08 14:05:28 scheduled-stop-090251 cri-dockerd[1482]: time="2025-09-08T14:05:28Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/c1fb1de0a74c1a2b6985b4fb9b0e567f799396703066fe817491839e3af8bde8/resolv.conf as [nameserver 192.168.76.1 search us-east-2.compute.internal options edns0 trust-ad ndots:0]"
	Sep 08 14:05:28 scheduled-stop-090251 cri-dockerd[1482]: time="2025-09-08T14:05:28Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/e89fa36ba0c7006de8be12de69ae7efa119759812fabec5eaa6a80a7674561f6/resolv.conf as [nameserver 192.168.76.1 search us-east-2.compute.internal options edns0 trust-ad ndots:0]"
	Sep 08 14:05:28 scheduled-stop-090251 cri-dockerd[1482]: time="2025-09-08T14:05:28Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/7ab11ddfac51692bd825918837941486179091711a250cc70314d6c6001f1490/resolv.conf as [nameserver 192.168.76.1 search us-east-2.compute.internal options ndots:0 edns0 trust-ad]"
	Sep 08 14:05:28 scheduled-stop-090251 cri-dockerd[1482]: time="2025-09-08T14:05:28Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a1fd5eff6da81b7a03da2aba3eb8ed3f0e0e446f6a06d516051ef557042acd4e/resolv.conf as [nameserver 192.168.76.1 search us-east-2.compute.internal options edns0 trust-ad ndots:0]"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	e90e2091959d1       996be7e86d9b3       11 seconds ago      Running             kube-controller-manager   0                   a1fd5eff6da81       kube-controller-manager-scheduled-stop-090251
	3631e75a66a29       a1894772a478e       11 seconds ago      Running             etcd                      0                   7ab11ddfac516       etcd-scheduled-stop-090251
	588f57c847642       a25f5ef9c34c3       11 seconds ago      Running             kube-scheduler            0                   e89fa36ba0c70       kube-scheduler-scheduled-stop-090251
	eb3c15f01e993       d291939e99406       11 seconds ago      Running             kube-apiserver            0                   c1fb1de0a74c1       kube-apiserver-scheduled-stop-090251
	
	
	==> describe nodes <==
	Name:               scheduled-stop-090251
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=scheduled-stop-090251
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4237956cfce90d4ab760d817400bd4c89cad50d6
	                    minikube.k8s.io/name=scheduled-stop-090251
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_08T14_05_36_0700
	                    minikube.k8s.io/version=v1.36.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Sep 2025 14:05:32 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  scheduled-stop-090251
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Sep 2025 14:05:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Sep 2025 14:05:35 +0000   Mon, 08 Sep 2025 14:05:29 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Sep 2025 14:05:35 +0000   Mon, 08 Sep 2025 14:05:29 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Sep 2025 14:05:35 +0000   Mon, 08 Sep 2025 14:05:29 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Sep 2025 14:05:35 +0000   Mon, 08 Sep 2025 14:05:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    scheduled-stop-090251
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 37e32435ecd1479d84a62bfcc574b345
	  System UUID:                f4816ee7-6992-4ba8-b5d7-bf5a41c6efba
	  Boot ID:                    bea2a2bf-dfac-4586-ae30-d33ee0d10246
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://28.4.0
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	Non-terminated Pods:          (4 in total)
	  Namespace                   Name                                             CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                             ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-scheduled-stop-090251                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         6s
	  kube-system                 kube-apiserver-scheduled-stop-090251             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4s
	  kube-system                 kube-controller-manager-scheduled-stop-090251    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4s
	  kube-system                 kube-scheduler-scheduled-stop-090251             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                650m (32%)  0 (0%)
	  memory             100Mi (1%)  0 (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From     Message
	  ----     ------                   ----               ----     -------
	  Normal   NodeHasSufficientMemory  11s (x8 over 11s)  kubelet  Node scheduled-stop-090251 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11s (x8 over 11s)  kubelet  Node scheduled-stop-090251 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11s (x7 over 11s)  kubelet  Node scheduled-stop-090251 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  11s                kubelet  Updated Node Allocatable limit across pods
	  Normal   Starting                 4s                 kubelet  Starting kubelet.
	  Warning  CgroupV1                 4s                 kubelet  cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeAllocatableEnforced  4s                 kubelet  Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  4s                 kubelet  Node scheduled-stop-090251 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    4s                 kubelet  Node scheduled-stop-090251 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     4s                 kubelet  Node scheduled-stop-090251 status is now: NodeHasSufficientPID
	
	
	==> dmesg <==
	[Sep 8 13:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014241] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.506765] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.034639] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.800626] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.784965] kauditd_printk_skb: 36 callbacks suppressed
	
	
	==> etcd [3631e75a66a2] <==
	{"level":"warn","ts":"2025-09-08T14:05:31.124021Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52930","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T14:05:31.145387Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52944","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T14:05:31.162667Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52958","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T14:05:31.202671Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52984","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T14:05:31.208962Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52970","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T14:05:31.227504Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53000","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T14:05:31.240246Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53026","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T14:05:31.258246Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53046","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T14:05:31.277808Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53064","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T14:05:31.316198Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53086","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T14:05:31.340135Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53094","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T14:05:31.380682Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53136","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T14:05:31.406485Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53156","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T14:05:31.415915Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53106","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T14:05:31.421871Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53186","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T14:05:31.458041Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53210","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T14:05:31.500328Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53234","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T14:05:31.514861Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53250","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T14:05:31.528866Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53268","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T14:05:31.546097Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53284","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T14:05:31.607665Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53304","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T14:05:31.633106Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53310","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T14:05:31.656935Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53342","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T14:05:31.685650Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53360","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T14:05:31.862642Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53376","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 14:05:39 up 48 min,  0 users,  load average: 2.61, 2.71, 2.89
	Linux scheduled-stop-090251 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kube-apiserver [eb3c15f01e99] <==
	I0908 14:05:32.848918       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I0908 14:05:32.849015       1 policy_source.go:240] refreshing policies
	I0908 14:05:32.858170       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I0908 14:05:32.858384       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I0908 14:05:32.858653       1 default_servicecidr_controller.go:166] Creating default ServiceCIDR with CIDRs: [10.96.0.0/12]
	I0908 14:05:32.864084       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I0908 14:05:32.864085       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0908 14:05:32.872206       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I0908 14:05:32.874689       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	E0908 14:05:32.881278       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I0908 14:05:32.905084       1 controller.go:667] quota admission added evaluator for: namespaces
	I0908 14:05:33.101084       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I0908 14:05:33.637373       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0908 14:05:33.642850       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0908 14:05:33.642948       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0908 14:05:34.412247       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0908 14:05:34.467601       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0908 14:05:34.548981       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0908 14:05:34.557263       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I0908 14:05:34.558548       1 controller.go:667] quota admission added evaluator for: endpoints
	I0908 14:05:34.564153       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0908 14:05:34.722312       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I0908 14:05:35.573577       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I0908 14:05:35.598380       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0908 14:05:35.610153       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	
	
	==> kube-controller-manager [e90e2091959d] <==
	I0908 14:05:38.268548       1 node_lifecycle_controller.go:453] "Sending events to api server" logger="node-lifecycle-controller"
	I0908 14:05:38.268572       1 node_lifecycle_controller.go:464] "Starting node controller" logger="node-lifecycle-controller"
	I0908 14:05:38.268578       1 shared_informer.go:349] "Waiting for caches to sync" controller="taint"
	I0908 14:05:38.420089       1 controllermanager.go:781] "Started controller" controller="persistentvolume-binder-controller"
	I0908 14:05:38.420262       1 pv_controller_base.go:308] "Starting persistent volume controller" logger="persistentvolume-binder-controller"
	I0908 14:05:38.420277       1 shared_informer.go:349] "Waiting for caches to sync" controller="persistent volume"
	I0908 14:05:38.571072       1 controllermanager.go:781] "Started controller" controller="resourceclaim-controller"
	I0908 14:05:38.571101       1 controllermanager.go:733] "Controller is disabled by a feature gate" controller="device-taint-eviction-controller" requiredFeatureGates=["DynamicResourceAllocation","DRADeviceTaints"]
	I0908 14:05:38.571150       1 controller.go:397] "Starting resource claim controller" logger="resourceclaim-controller"
	I0908 14:05:38.571190       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource_claim"
	I0908 14:05:38.725654       1 controllermanager.go:781] "Started controller" controller="service-cidr-controller"
	I0908 14:05:38.725826       1 servicecidrs_controller.go:137] "Starting" logger="service-cidr-controller" controller="service-cidr-controller"
	I0908 14:05:38.725837       1 shared_informer.go:349] "Waiting for caches to sync" controller="service-cidr-controller"
	I0908 14:05:38.870814       1 controllermanager.go:781] "Started controller" controller="replicationcontroller-controller"
	I0908 14:05:38.870924       1 replica_set.go:243] "Starting controller" logger="replicationcontroller-controller" name="replicationcontroller"
	I0908 14:05:38.870934       1 shared_informer.go:349] "Waiting for caches to sync" controller="ReplicationController"
	I0908 14:05:39.020009       1 controllermanager.go:781] "Started controller" controller="daemonset-controller"
	I0908 14:05:39.020173       1 daemon_controller.go:310] "Starting daemon sets controller" logger="daemonset-controller"
	I0908 14:05:39.020186       1 shared_informer.go:349] "Waiting for caches to sync" controller="daemon sets"
	I0908 14:05:39.318641       1 controllermanager.go:781] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0908 14:05:39.318697       1 horizontal.go:205] "Starting HPA controller" logger="horizontal-pod-autoscaler-controller"
	I0908 14:05:39.318706       1 shared_informer.go:349] "Waiting for caches to sync" controller="HPA"
	I0908 14:05:39.469914       1 controllermanager.go:781] "Started controller" controller="cronjob-controller"
	I0908 14:05:39.470011       1 cronjob_controllerv2.go:145] "Starting cronjob controller v2" logger="cronjob-controller"
	I0908 14:05:39.470024       1 shared_informer.go:349] "Waiting for caches to sync" controller="cronjob"
	
	
	==> kube-scheduler [588f57c84764] <==
	I0908 14:05:33.205660       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0908 14:05:33.205479       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0908 14:05:33.205509       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E0908 14:05:33.216266       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0908 14:05:33.216594       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0908 14:05:33.222402       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0908 14:05:33.222872       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0908 14:05:33.223143       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0908 14:05:33.223374       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0908 14:05:33.223573       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E0908 14:05:33.223771       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0908 14:05:33.223949       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0908 14:05:33.224121       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0908 14:05:33.224298       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0908 14:05:33.224575       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E0908 14:05:33.224772       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0908 14:05:33.228591       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0908 14:05:33.228892       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E0908 14:05:33.229146       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E0908 14:05:33.229471       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0908 14:05:33.229759       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0908 14:05:33.229981       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0908 14:05:34.043181       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0908 14:05:34.099315       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I0908 14:05:34.706839       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Sep 08 14:05:35 scheduled-stop-090251 kubelet[2349]: I0908 14:05:35.906588    2349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/44abaa4290b67c228c69c43d79fbe80b-etcd-data\") pod \"etcd-scheduled-stop-090251\" (UID: \"44abaa4290b67c228c69c43d79fbe80b\") " pod="kube-system/etcd-scheduled-stop-090251"
	Sep 08 14:05:35 scheduled-stop-090251 kubelet[2349]: I0908 14:05:35.906606    2349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a75ace79cf9a5bb5217509871fcd1cb3-k8s-certs\") pod \"kube-controller-manager-scheduled-stop-090251\" (UID: \"a75ace79cf9a5bb5217509871fcd1cb3\") " pod="kube-system/kube-controller-manager-scheduled-stop-090251"
	Sep 08 14:05:35 scheduled-stop-090251 kubelet[2349]: I0908 14:05:35.906622    2349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a75ace79cf9a5bb5217509871fcd1cb3-usr-share-ca-certificates\") pod \"kube-controller-manager-scheduled-stop-090251\" (UID: \"a75ace79cf9a5bb5217509871fcd1cb3\") " pod="kube-system/kube-controller-manager-scheduled-stop-090251"
	Sep 08 14:05:35 scheduled-stop-090251 kubelet[2349]: I0908 14:05:35.906638    2349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9c7d48a7794008853495fc4736306752-k8s-certs\") pod \"kube-apiserver-scheduled-stop-090251\" (UID: \"9c7d48a7794008853495fc4736306752\") " pod="kube-system/kube-apiserver-scheduled-stop-090251"
	Sep 08 14:05:35 scheduled-stop-090251 kubelet[2349]: I0908 14:05:35.906655    2349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9c7d48a7794008853495fc4736306752-usr-share-ca-certificates\") pod \"kube-apiserver-scheduled-stop-090251\" (UID: \"9c7d48a7794008853495fc4736306752\") " pod="kube-system/kube-apiserver-scheduled-stop-090251"
	Sep 08 14:05:35 scheduled-stop-090251 kubelet[2349]: I0908 14:05:35.906684    2349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a75ace79cf9a5bb5217509871fcd1cb3-ca-certs\") pod \"kube-controller-manager-scheduled-stop-090251\" (UID: \"a75ace79cf9a5bb5217509871fcd1cb3\") " pod="kube-system/kube-controller-manager-scheduled-stop-090251"
	Sep 08 14:05:35 scheduled-stop-090251 kubelet[2349]: I0908 14:05:35.906706    2349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a75ace79cf9a5bb5217509871fcd1cb3-etc-ca-certificates\") pod \"kube-controller-manager-scheduled-stop-090251\" (UID: \"a75ace79cf9a5bb5217509871fcd1cb3\") " pod="kube-system/kube-controller-manager-scheduled-stop-090251"
	Sep 08 14:05:35 scheduled-stop-090251 kubelet[2349]: I0908 14:05:35.906728    2349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3ca8911316dfc033eef34347a8dba1d3-kubeconfig\") pod \"kube-scheduler-scheduled-stop-090251\" (UID: \"3ca8911316dfc033eef34347a8dba1d3\") " pod="kube-system/kube-scheduler-scheduled-stop-090251"
	Sep 08 14:05:35 scheduled-stop-090251 kubelet[2349]: I0908 14:05:35.906744    2349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/44abaa4290b67c228c69c43d79fbe80b-etcd-certs\") pod \"etcd-scheduled-stop-090251\" (UID: \"44abaa4290b67c228c69c43d79fbe80b\") " pod="kube-system/etcd-scheduled-stop-090251"
	Sep 08 14:05:35 scheduled-stop-090251 kubelet[2349]: I0908 14:05:35.906841    2349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9c7d48a7794008853495fc4736306752-ca-certs\") pod \"kube-apiserver-scheduled-stop-090251\" (UID: \"9c7d48a7794008853495fc4736306752\") " pod="kube-system/kube-apiserver-scheduled-stop-090251"
	Sep 08 14:05:35 scheduled-stop-090251 kubelet[2349]: I0908 14:05:35.906872    2349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a75ace79cf9a5bb5217509871fcd1cb3-usr-local-share-ca-certificates\") pod \"kube-controller-manager-scheduled-stop-090251\" (UID: \"a75ace79cf9a5bb5217509871fcd1cb3\") " pod="kube-system/kube-controller-manager-scheduled-stop-090251"
	Sep 08 14:05:36 scheduled-stop-090251 kubelet[2349]: I0908 14:05:36.491332    2349 apiserver.go:52] "Watching apiserver"
	Sep 08 14:05:36 scheduled-stop-090251 kubelet[2349]: I0908 14:05:36.600680    2349 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Sep 08 14:05:36 scheduled-stop-090251 kubelet[2349]: I0908 14:05:36.655746    2349 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-scheduled-stop-090251" podStartSLOduration=3.6557246340000003 podStartE2EDuration="3.655724634s" podCreationTimestamp="2025-09-08 14:05:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-08 14:05:36.641801904 +0000 UTC m=+1.247308390" watchObservedRunningTime="2025-09-08 14:05:36.655724634 +0000 UTC m=+1.261231120"
	Sep 08 14:05:36 scheduled-stop-090251 kubelet[2349]: I0908 14:05:36.661379    2349 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-scheduled-stop-090251"
	Sep 08 14:05:36 scheduled-stop-090251 kubelet[2349]: I0908 14:05:36.661882    2349 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-scheduled-stop-090251"
	Sep 08 14:05:36 scheduled-stop-090251 kubelet[2349]: I0908 14:05:36.662418    2349 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-scheduled-stop-090251"
	Sep 08 14:05:36 scheduled-stop-090251 kubelet[2349]: I0908 14:05:36.662981    2349 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-scheduled-stop-090251"
	Sep 08 14:05:36 scheduled-stop-090251 kubelet[2349]: E0908 14:05:36.686920    2349 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-scheduled-stop-090251\" already exists" pod="kube-system/kube-scheduler-scheduled-stop-090251"
	Sep 08 14:05:36 scheduled-stop-090251 kubelet[2349]: I0908 14:05:36.687039    2349 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-scheduled-stop-090251" podStartSLOduration=1.687023883 podStartE2EDuration="1.687023883s" podCreationTimestamp="2025-09-08 14:05:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-08 14:05:36.656233575 +0000 UTC m=+1.261740053" watchObservedRunningTime="2025-09-08 14:05:36.687023883 +0000 UTC m=+1.292530361"
	Sep 08 14:05:36 scheduled-stop-090251 kubelet[2349]: E0908 14:05:36.687183    2349 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-scheduled-stop-090251\" already exists" pod="kube-system/kube-controller-manager-scheduled-stop-090251"
	Sep 08 14:05:36 scheduled-stop-090251 kubelet[2349]: E0908 14:05:36.687333    2349 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-scheduled-stop-090251\" already exists" pod="kube-system/kube-apiserver-scheduled-stop-090251"
	Sep 08 14:05:36 scheduled-stop-090251 kubelet[2349]: E0908 14:05:36.687421    2349 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-scheduled-stop-090251\" already exists" pod="kube-system/etcd-scheduled-stop-090251"
	Sep 08 14:05:36 scheduled-stop-090251 kubelet[2349]: I0908 14:05:36.701328    2349 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-scheduled-stop-090251" podStartSLOduration=2.701307666 podStartE2EDuration="2.701307666s" podCreationTimestamp="2025-09-08 14:05:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-08 14:05:36.699568521 +0000 UTC m=+1.305075007" watchObservedRunningTime="2025-09-08 14:05:36.701307666 +0000 UTC m=+1.306814136"
	Sep 08 14:05:36 scheduled-stop-090251 kubelet[2349]: I0908 14:05:36.701596    2349 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-scheduled-stop-090251" podStartSLOduration=1.701587253 podStartE2EDuration="1.701587253s" podCreationTimestamp="2025-09-08 14:05:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-08 14:05:36.688120698 +0000 UTC m=+1.293627184" watchObservedRunningTime="2025-09-08 14:05:36.701587253 +0000 UTC m=+1.307093739"
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p scheduled-stop-090251 -n scheduled-stop-090251
helpers_test.go:269: (dbg) Run:  kubectl --context scheduled-stop-090251 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: storage-provisioner
helpers_test.go:282: ======> post-mortem[TestScheduledStopUnix]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context scheduled-stop-090251 describe pod storage-provisioner
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context scheduled-stop-090251 describe pod storage-provisioner: exit status 1 (118.644161ms)

** stderr ** 
	Error from server (NotFound): pods "storage-provisioner" not found

** /stderr **
helpers_test.go:287: kubectl --context scheduled-stop-090251 describe pod storage-provisioner: exit status 1
helpers_test.go:175: Cleaning up "scheduled-stop-090251" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-090251
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-090251: (2.194673736s)
--- FAIL: TestScheduledStopUnix (37.07s)


Test pass (320/347)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 6.8
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.09
9 TestDownloadOnly/v1.28.0/DeleteAll 0.23
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.15
12 TestDownloadOnly/v1.34.0/json-events 5.53
13 TestDownloadOnly/v1.34.0/preload-exists 0
17 TestDownloadOnly/v1.34.0/LogsDuration 0.09
18 TestDownloadOnly/v1.34.0/DeleteAll 0.21
19 TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds 0.15
21 TestBinaryMirror 0.61
22 TestOffline 56.89
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.08
27 TestAddons/Setup 156.29
29 TestAddons/serial/Volcano 43.16
31 TestAddons/serial/GCPAuth/Namespaces 0.19
32 TestAddons/serial/GCPAuth/FakeCredentials 9.96
35 TestAddons/parallel/Registry 18.97
36 TestAddons/parallel/RegistryCreds 1.07
37 TestAddons/parallel/Ingress 21.77
38 TestAddons/parallel/InspektorGadget 6.22
39 TestAddons/parallel/MetricsServer 6.2
41 TestAddons/parallel/CSI 47.77
42 TestAddons/parallel/Headlamp 25.81
43 TestAddons/parallel/CloudSpanner 6.52
44 TestAddons/parallel/LocalPath 52.26
45 TestAddons/parallel/NvidiaDevicePlugin 6.51
46 TestAddons/parallel/Yakd 11.77
48 TestAddons/StoppedEnableDisable 11.09
49 TestCertOptions 42.79
50 TestCertExpiration 263.87
51 TestDockerFlags 42.3
52 TestForceSystemdFlag 44.67
53 TestForceSystemdEnv 46.48
59 TestErrorSpam/setup 34.07
60 TestErrorSpam/start 0.76
61 TestErrorSpam/status 1.1
62 TestErrorSpam/pause 1.33
63 TestErrorSpam/unpause 1.44
64 TestErrorSpam/stop 2.24
67 TestFunctional/serial/CopySyncFile 0
68 TestFunctional/serial/StartWithProxy 73.57
69 TestFunctional/serial/AuditLog 0
70 TestFunctional/serial/SoftStart 50.83
71 TestFunctional/serial/KubeContext 0.06
72 TestFunctional/serial/KubectlGetPods 0.11
75 TestFunctional/serial/CacheCmd/cache/add_remote 2.92
76 TestFunctional/serial/CacheCmd/cache/add_local 1.11
77 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
78 TestFunctional/serial/CacheCmd/cache/list 0.07
79 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.33
80 TestFunctional/serial/CacheCmd/cache/cache_reload 1.56
81 TestFunctional/serial/CacheCmd/cache/delete 0.12
82 TestFunctional/serial/MinikubeKubectlCmd 0.13
83 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.14
84 TestFunctional/serial/ExtraConfig 57.32
85 TestFunctional/serial/ComponentHealth 0.1
86 TestFunctional/serial/LogsCmd 1.21
87 TestFunctional/serial/LogsFileCmd 1.2
88 TestFunctional/serial/InvalidService 4.78
90 TestFunctional/parallel/ConfigCmd 0.49
91 TestFunctional/parallel/DashboardCmd 13.43
92 TestFunctional/parallel/DryRun 0.48
93 TestFunctional/parallel/InternationalLanguage 0.25
94 TestFunctional/parallel/StatusCmd 1.24
98 TestFunctional/parallel/ServiceCmdConnect 10.85
99 TestFunctional/parallel/AddonsCmd 0.21
100 TestFunctional/parallel/PersistentVolumeClaim 27.12
102 TestFunctional/parallel/SSHCmd 0.72
103 TestFunctional/parallel/CpCmd 2.51
105 TestFunctional/parallel/FileSync 0.37
106 TestFunctional/parallel/CertSync 2.22
110 TestFunctional/parallel/NodeLabels 0.09
112 TestFunctional/parallel/NonActiveRuntimeDisabled 0.29
114 TestFunctional/parallel/License 0.35
116 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.63
117 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
119 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.46
120 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.11
121 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
125 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
126 TestFunctional/parallel/ServiceCmd/DeployApp 7.2
127 TestFunctional/parallel/ProfileCmd/profile_not_create 0.47
128 TestFunctional/parallel/ServiceCmd/List 0.56
129 TestFunctional/parallel/ProfileCmd/profile_list 0.56
130 TestFunctional/parallel/ServiceCmd/JSONOutput 0.64
131 TestFunctional/parallel/ProfileCmd/profile_json_output 0.52
132 TestFunctional/parallel/ServiceCmd/HTTPS 0.61
133 TestFunctional/parallel/MountCmd/any-port 9.54
134 TestFunctional/parallel/ServiceCmd/Format 0.47
135 TestFunctional/parallel/ServiceCmd/URL 0.62
136 TestFunctional/parallel/MountCmd/specific-port 1.7
137 TestFunctional/parallel/MountCmd/VerifyCleanup 2.43
138 TestFunctional/parallel/Version/short 0.07
139 TestFunctional/parallel/Version/components 1.29
140 TestFunctional/parallel/ImageCommands/ImageListShort 0.23
141 TestFunctional/parallel/ImageCommands/ImageListTable 0.25
142 TestFunctional/parallel/ImageCommands/ImageListJson 0.24
143 TestFunctional/parallel/ImageCommands/ImageListYaml 0.24
144 TestFunctional/parallel/ImageCommands/ImageBuild 3.77
145 TestFunctional/parallel/ImageCommands/Setup 1
146 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 0.89
147 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.85
148 TestFunctional/parallel/UpdateContextCmd/no_changes 0.23
149 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.21
150 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.21
151 TestFunctional/parallel/DockerEnv/bash 1.34
152 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.45
153 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.39
154 TestFunctional/parallel/ImageCommands/ImageRemove 0.55
155 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.7
156 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.54
157 TestFunctional/delete_echo-server_images 0.04
158 TestFunctional/delete_my-image_image 0.02
159 TestFunctional/delete_minikube_cached_images 0.02
164 TestMultiControlPlane/serial/StartCluster 133.69
165 TestMultiControlPlane/serial/DeployApp 44.97
166 TestMultiControlPlane/serial/PingHostFromPods 1.7
167 TestMultiControlPlane/serial/AddWorkerNode 19.96
168 TestMultiControlPlane/serial/NodeLabels 0.15
169 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.35
170 TestMultiControlPlane/serial/CopyFile 21.38
171 TestMultiControlPlane/serial/StopSecondaryNode 11.75
172 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.78
173 TestMultiControlPlane/serial/RestartSecondaryNode 50.99
174 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.13
175 TestMultiControlPlane/serial/RestartClusterKeepsNodes 239.93
176 TestMultiControlPlane/serial/DeleteSecondaryNode 12.24
177 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.82
178 TestMultiControlPlane/serial/StopCluster 32.79
179 TestMultiControlPlane/serial/RestartCluster 101.18
180 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.79
181 TestMultiControlPlane/serial/AddSecondaryNode 43.59
182 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.59
185 TestImageBuild/serial/Setup 38.08
186 TestImageBuild/serial/NormalBuild 1.68
187 TestImageBuild/serial/BuildWithBuildArg 0.93
188 TestImageBuild/serial/BuildWithDockerIgnore 0.82
189 TestImageBuild/serial/BuildWithSpecifiedDockerfile 1.02
193 TestJSONOutput/start/Command 48.85
194 TestJSONOutput/start/Audit 0
196 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
197 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
199 TestJSONOutput/pause/Command 0.59
200 TestJSONOutput/pause/Audit 0
202 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
203 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
205 TestJSONOutput/unpause/Command 0.54
206 TestJSONOutput/unpause/Audit 0
208 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
209 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
211 TestJSONOutput/stop/Command 10.96
212 TestJSONOutput/stop/Audit 0
214 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
215 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
216 TestErrorJSONOutput 0.25
218 TestKicCustomNetwork/create_custom_network 35.32
219 TestKicCustomNetwork/use_default_bridge_network 38.33
220 TestKicExistingNetwork 36.22
221 TestKicCustomSubnet 36.05
222 TestKicStaticIP 34.31
223 TestMainNoArgs 0.06
224 TestMinikubeProfile 79.14
227 TestMountStart/serial/StartWithMountFirst 10.55
228 TestMountStart/serial/VerifyMountFirst 0.25
229 TestMountStart/serial/StartWithMountSecond 7.54
230 TestMountStart/serial/VerifyMountSecond 0.26
231 TestMountStart/serial/DeleteFirst 1.47
232 TestMountStart/serial/VerifyMountPostDelete 0.27
233 TestMountStart/serial/Stop 1.19
234 TestMountStart/serial/RestartStopped 8.87
235 TestMountStart/serial/VerifyMountPostStop 0.25
238 TestMultiNode/serial/FreshStart2Nodes 64.01
239 TestMultiNode/serial/DeployApp2Nodes 48.67
240 TestMultiNode/serial/PingHostFrom2Pods 1.06
241 TestMultiNode/serial/AddNode 16.94
242 TestMultiNode/serial/MultiNodeLabels 0.16
243 TestMultiNode/serial/ProfileList 0.85
244 TestMultiNode/serial/CopyFile 10.73
245 TestMultiNode/serial/StopNode 2.25
246 TestMultiNode/serial/StartAfterStop 10.03
247 TestMultiNode/serial/RestartKeepsNodes 79.23
248 TestMultiNode/serial/DeleteNode 5.64
249 TestMultiNode/serial/StopMultiNode 21.67
250 TestMultiNode/serial/RestartMultiNode 51.85
251 TestMultiNode/serial/ValidateNameConflict 37.65
256 TestPreload 122.48
259 TestSkaffold 138.5
261 TestInsufficientStorage 11.04
262 TestRunningBinaryUpgrade 80.48
264 TestKubernetesUpgrade 372.9
265 TestMissingContainerUpgrade 123.24
267 TestPause/serial/Start 83.61
269 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
270 TestNoKubernetes/serial/StartWithK8s 40.36
271 TestPause/serial/SecondStartNoReconfiguration 49.68
272 TestNoKubernetes/serial/StartWithStopK8s 17.97
273 TestNoKubernetes/serial/Start 11.08
274 TestNoKubernetes/serial/VerifyK8sNotRunning 0.31
275 TestNoKubernetes/serial/ProfileList 1.15
276 TestNoKubernetes/serial/Stop 1.21
277 TestNoKubernetes/serial/StartNoArgs 8.43
278 TestPause/serial/Pause 0.8
279 TestPause/serial/VerifyStatus 0.4
280 TestPause/serial/Unpause 0.67
281 TestPause/serial/PauseAgain 1
282 TestPause/serial/DeletePaused 2.49
283 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.4
284 TestPause/serial/VerifyDeletedResources 0.15
296 TestStoppedBinaryUpgrade/Setup 1.02
297 TestStoppedBinaryUpgrade/Upgrade 77.26
305 TestNetworkPlugins/group/auto/Start 75.12
306 TestStoppedBinaryUpgrade/MinikubeLogs 1.18
307 TestNetworkPlugins/group/kindnet/Start 73.06
308 TestNetworkPlugins/group/auto/KubeletFlags 0.54
309 TestNetworkPlugins/group/auto/NetCatPod 11.54
310 TestNetworkPlugins/group/auto/DNS 0.33
311 TestNetworkPlugins/group/auto/Localhost 0.27
312 TestNetworkPlugins/group/auto/HairPin 0.25
313 TestNetworkPlugins/group/calico/Start 85.47
314 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
315 TestNetworkPlugins/group/kindnet/KubeletFlags 0.49
316 TestNetworkPlugins/group/kindnet/NetCatPod 10.38
317 TestNetworkPlugins/group/kindnet/DNS 0.27
318 TestNetworkPlugins/group/kindnet/Localhost 0.31
319 TestNetworkPlugins/group/kindnet/HairPin 0.24
320 TestNetworkPlugins/group/custom-flannel/Start 74.67
321 TestNetworkPlugins/group/calico/ControllerPod 6.01
322 TestNetworkPlugins/group/calico/KubeletFlags 0.38
323 TestNetworkPlugins/group/calico/NetCatPod 13.45
324 TestNetworkPlugins/group/calico/DNS 0.3
325 TestNetworkPlugins/group/calico/Localhost 0.29
326 TestNetworkPlugins/group/calico/HairPin 0.43
327 TestNetworkPlugins/group/false/Start 86.53
328 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.37
329 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.35
330 TestNetworkPlugins/group/custom-flannel/DNS 0.27
331 TestNetworkPlugins/group/custom-flannel/Localhost 0.22
332 TestNetworkPlugins/group/custom-flannel/HairPin 0.25
333 TestNetworkPlugins/group/enable-default-cni/Start 79.81
334 TestNetworkPlugins/group/false/KubeletFlags 0.32
335 TestNetworkPlugins/group/false/NetCatPod 11.4
336 TestNetworkPlugins/group/false/DNS 0.23
337 TestNetworkPlugins/group/false/Localhost 0.16
338 TestNetworkPlugins/group/false/HairPin 0.16
339 TestNetworkPlugins/group/flannel/Start 76.41
340 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.35
341 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.39
342 TestNetworkPlugins/group/enable-default-cni/DNS 0.25
343 TestNetworkPlugins/group/enable-default-cni/Localhost 0.24
344 TestNetworkPlugins/group/enable-default-cni/HairPin 0.19
345 TestNetworkPlugins/group/bridge/Start 55.33
346 TestNetworkPlugins/group/flannel/ControllerPod 6
347 TestNetworkPlugins/group/flannel/KubeletFlags 0.44
348 TestNetworkPlugins/group/flannel/NetCatPod 11.41
349 TestNetworkPlugins/group/flannel/DNS 0.37
350 TestNetworkPlugins/group/flannel/Localhost 0.29
351 TestNetworkPlugins/group/flannel/HairPin 0.35
352 TestNetworkPlugins/group/bridge/KubeletFlags 0.46
353 TestNetworkPlugins/group/bridge/NetCatPod 11.38
354 TestNetworkPlugins/group/bridge/DNS 0.28
355 TestNetworkPlugins/group/bridge/Localhost 0.24
356 TestNetworkPlugins/group/bridge/HairPin 0.21
357 TestNetworkPlugins/group/kubenet/Start 80.8
359 TestStartStop/group/old-k8s-version/serial/FirstStart 62.13
360 TestNetworkPlugins/group/kubenet/KubeletFlags 0.31
361 TestNetworkPlugins/group/kubenet/NetCatPod 11.29
362 TestStartStop/group/old-k8s-version/serial/DeployApp 10.38
363 TestNetworkPlugins/group/kubenet/DNS 0.2
364 TestNetworkPlugins/group/kubenet/Localhost 0.19
365 TestNetworkPlugins/group/kubenet/HairPin 0.18
366 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.49
367 TestStartStop/group/old-k8s-version/serial/Stop 11.29
368 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.27
369 TestStartStop/group/old-k8s-version/serial/SecondStart 64.44
371 TestStartStop/group/no-preload/serial/FirstStart 61
372 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6
373 TestStartStop/group/no-preload/serial/DeployApp 8.37
374 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.14
375 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.12
376 TestStartStop/group/no-preload/serial/Stop 12.3
377 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.25
378 TestStartStop/group/old-k8s-version/serial/Pause 3.07
380 TestStartStop/group/embed-certs/serial/FirstStart 53.58
381 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.27
382 TestStartStop/group/no-preload/serial/SecondStart 61.41
383 TestStartStop/group/embed-certs/serial/DeployApp 10.37
384 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.1
385 TestStartStop/group/embed-certs/serial/Stop 11.08
386 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6
387 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.13
388 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.26
389 TestStartStop/group/embed-certs/serial/SecondStart 60.1
390 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.24
391 TestStartStop/group/no-preload/serial/Pause 3.07
393 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 81.07
394 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6
395 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.11
396 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.25
397 TestStartStop/group/embed-certs/serial/Pause 2.99
399 TestStartStop/group/newest-cni/serial/FirstStart 43.18
400 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.56
401 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.63
402 TestStartStop/group/default-k8s-diff-port/serial/Stop 11.13
403 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.28
404 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 36.35
405 TestStartStop/group/newest-cni/serial/DeployApp 0
406 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.57
407 TestStartStop/group/newest-cni/serial/Stop 9.33
408 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.34
409 TestStartStop/group/newest-cni/serial/SecondStart 21.38
410 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
411 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
412 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
413 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.25
414 TestStartStop/group/newest-cni/serial/Pause 3.08
415 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.11
416 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.22
417 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.84
TestDownloadOnly/v1.28.0/json-events (6.8s)

=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-359532 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-359532 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=docker  --container-runtime=docker: (6.802954607s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (6.80s)

TestDownloadOnly/v1.28.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I0908 13:27:19.682727    4120 preload.go:131] Checking if preload exists for k8s version v1.28.0 and runtime docker
I0908 13:27:19.682828    4120 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21504-2320/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

TestDownloadOnly/v1.28.0/LogsDuration (0.09s)

=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-359532
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-359532: exit status 85 (91.521746ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                     ARGS                                                                                      │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-359532 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=docker  --container-runtime=docker │ download-only-359532 │ jenkins │ v1.36.0 │ 08 Sep 25 13:27 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/08 13:27:12
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0908 13:27:12.925683    4125 out.go:360] Setting OutFile to fd 1 ...
	I0908 13:27:12.925799    4125 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 13:27:12.925809    4125 out.go:374] Setting ErrFile to fd 2...
	I0908 13:27:12.925813    4125 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 13:27:12.926082    4125 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21504-2320/.minikube/bin
	W0908 13:27:12.926224    4125 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21504-2320/.minikube/config/config.json: open /home/jenkins/minikube-integration/21504-2320/.minikube/config/config.json: no such file or directory
	I0908 13:27:12.926621    4125 out.go:368] Setting JSON to true
	I0908 13:27:12.927443    4125 start.go:130] hostinfo: {"hostname":"ip-172-31-29-130","uptime":581,"bootTime":1757337452,"procs":149,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0908 13:27:12.927513    4125 start.go:140] virtualization:  
	I0908 13:27:12.931805    4125 out.go:99] [download-only-359532] minikube v1.36.0 on Ubuntu 20.04 (arm64)
	W0908 13:27:12.932022    4125 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/21504-2320/.minikube/cache/preloaded-tarball: no such file or directory
	I0908 13:27:12.932094    4125 notify.go:220] Checking for updates...
	I0908 13:27:12.934957    4125 out.go:171] MINIKUBE_LOCATION=21504
	I0908 13:27:12.938373    4125 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0908 13:27:12.941277    4125 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21504-2320/kubeconfig
	I0908 13:27:12.944095    4125 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21504-2320/.minikube
	I0908 13:27:12.947013    4125 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W0908 13:27:12.952782    4125 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0908 13:27:12.953033    4125 driver.go:421] Setting default libvirt URI to qemu:///system
	I0908 13:27:12.983936    4125 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0908 13:27:12.984053    4125 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0908 13:27:13.395156    4125 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-09-08 13:27:13.38549952 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0908 13:27:13.395269    4125 docker.go:318] overlay module found
	I0908 13:27:13.398370    4125 out.go:99] Using the docker driver based on user configuration
	I0908 13:27:13.398404    4125 start.go:304] selected driver: docker
	I0908 13:27:13.398425    4125 start.go:918] validating driver "docker" against <nil>
	I0908 13:27:13.398525    4125 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0908 13:27:13.460210    4125 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-09-08 13:27:13.451462027 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0908 13:27:13.460373    4125 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0908 13:27:13.460660    4125 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I0908 13:27:13.460828    4125 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I0908 13:27:13.464080    4125 out.go:171] Using Docker driver with root privileges
	I0908 13:27:13.466929    4125 cni.go:84] Creating CNI manager for ""
	I0908 13:27:13.467012    4125 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0908 13:27:13.467027    4125 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0908 13:27:13.467108    4125 start.go:348] cluster config:
	{Name:download-only-359532 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-359532 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 13:27:13.470175    4125 out.go:99] Starting "download-only-359532" primary control-plane node in "download-only-359532" cluster
	I0908 13:27:13.470201    4125 cache.go:123] Beginning downloading kic base image for docker with docker
	I0908 13:27:13.473059    4125 out.go:99] Pulling base image v0.0.47-1756980985-21488 ...
	I0908 13:27:13.473095    4125 preload.go:131] Checking if preload exists for k8s version v1.28.0 and runtime docker
	I0908 13:27:13.473247    4125 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 in local docker daemon
	I0908 13:27:13.489183    4125 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 to local cache
	I0908 13:27:13.489378    4125 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 in local cache directory
	I0908 13:27:13.489478    4125 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 to local cache
	I0908 13:27:13.540368    4125 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-arm64.tar.lz4
	I0908 13:27:13.540397    4125 cache.go:58] Caching tarball of preloaded images
	I0908 13:27:13.540586    4125 preload.go:131] Checking if preload exists for k8s version v1.28.0 and runtime docker
	I0908 13:27:13.543875    4125 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I0908 13:27:13.543914    4125 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.28.0-docker-overlay2-arm64.tar.lz4 ...
	I0908 13:27:13.627692    4125 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-arm64.tar.lz4?checksum=md5:002a73d62a3b066a08573cf3da2c8cb4 -> /home/jenkins/minikube-integration/21504-2320/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-arm64.tar.lz4
	
	
	* The control-plane node download-only-359532 host does not exist
	  To start a cluster, run: "minikube start -p download-only-359532"
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.09s)

TestDownloadOnly/v1.28.0/DeleteAll (0.23s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.23s)

TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.15s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-359532
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.15s)

TestDownloadOnly/v1.34.0/json-events (5.53s)

=== RUN   TestDownloadOnly/v1.34.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-380759 --force --alsologtostderr --kubernetes-version=v1.34.0 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-380759 --force --alsologtostderr --kubernetes-version=v1.34.0 --container-runtime=docker --driver=docker  --container-runtime=docker: (5.525281353s)
--- PASS: TestDownloadOnly/v1.34.0/json-events (5.53s)

TestDownloadOnly/v1.34.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.34.0/preload-exists
I0908 13:27:25.683986    4120 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
I0908 13:27:25.684025    4120 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21504-2320/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.0/preload-exists (0.00s)

TestDownloadOnly/v1.34.0/LogsDuration (0.09s)

=== RUN   TestDownloadOnly/v1.34.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-380759
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-380759: exit status 85 (89.425343ms)
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                     ARGS                                                                                      │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-359532 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=docker  --container-runtime=docker │ download-only-359532 │ jenkins │ v1.36.0 │ 08 Sep 25 13:27 UTC │                     │
	│ delete  │ --all                                                                                                                                                                         │ minikube             │ jenkins │ v1.36.0 │ 08 Sep 25 13:27 UTC │ 08 Sep 25 13:27 UTC │
	│ delete  │ -p download-only-359532                                                                                                                                                       │ download-only-359532 │ jenkins │ v1.36.0 │ 08 Sep 25 13:27 UTC │ 08 Sep 25 13:27 UTC │
	│ start   │ -o=json --download-only -p download-only-380759 --force --alsologtostderr --kubernetes-version=v1.34.0 --container-runtime=docker --driver=docker  --container-runtime=docker │ download-only-380759 │ jenkins │ v1.36.0 │ 08 Sep 25 13:27 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/08 13:27:20
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0908 13:27:20.202842    4327 out.go:360] Setting OutFile to fd 1 ...
	I0908 13:27:20.202960    4327 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 13:27:20.202969    4327 out.go:374] Setting ErrFile to fd 2...
	I0908 13:27:20.202975    4327 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 13:27:20.203243    4327 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21504-2320/.minikube/bin
	I0908 13:27:20.203651    4327 out.go:368] Setting JSON to true
	I0908 13:27:20.204399    4327 start.go:130] hostinfo: {"hostname":"ip-172-31-29-130","uptime":589,"bootTime":1757337452,"procs":144,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0908 13:27:20.204466    4327 start.go:140] virtualization:  
	I0908 13:27:20.207842    4327 out.go:99] [download-only-380759] minikube v1.36.0 on Ubuntu 20.04 (arm64)
	I0908 13:27:20.208131    4327 notify.go:220] Checking for updates...
	I0908 13:27:20.212301    4327 out.go:171] MINIKUBE_LOCATION=21504
	I0908 13:27:20.215338    4327 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0908 13:27:20.218469    4327 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21504-2320/kubeconfig
	I0908 13:27:20.221320    4327 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21504-2320/.minikube
	I0908 13:27:20.224319    4327 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W0908 13:27:20.230057    4327 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0908 13:27:20.230318    4327 driver.go:421] Setting default libvirt URI to qemu:///system
	I0908 13:27:20.270168    4327 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0908 13:27:20.270355    4327 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0908 13:27:20.330071    4327 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:48 SystemTime:2025-09-08 13:27:20.320265614 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0908 13:27:20.330183    4327 docker.go:318] overlay module found
	I0908 13:27:20.333180    4327 out.go:99] Using the docker driver based on user configuration
	I0908 13:27:20.333223    4327 start.go:304] selected driver: docker
	I0908 13:27:20.333241    4327 start.go:918] validating driver "docker" against <nil>
	I0908 13:27:20.333349    4327 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0908 13:27:20.403135    4327 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:48 SystemTime:2025-09-08 13:27:20.393506662 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0908 13:27:20.403304    4327 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0908 13:27:20.403602    4327 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I0908 13:27:20.403758    4327 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I0908 13:27:20.406897    4327 out.go:171] Using Docker driver with root privileges
	I0908 13:27:20.409714    4327 cni.go:84] Creating CNI manager for ""
	I0908 13:27:20.409787    4327 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0908 13:27:20.409808    4327 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0908 13:27:20.409900    4327 start.go:348] cluster config:
	{Name:download-only-380759 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:download-only-380759 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 13:27:20.412861    4327 out.go:99] Starting "download-only-380759" primary control-plane node in "download-only-380759" cluster
	I0908 13:27:20.412882    4327 cache.go:123] Beginning downloading kic base image for docker with docker
	I0908 13:27:20.415657    4327 out.go:99] Pulling base image v0.0.47-1756980985-21488 ...
	I0908 13:27:20.415688    4327 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0908 13:27:20.415864    4327 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 in local docker daemon
	I0908 13:27:20.432656    4327 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 to local cache
	I0908 13:27:20.432782    4327 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 in local cache directory
	I0908 13:27:20.432807    4327 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 in local cache directory, skipping pull
	I0908 13:27:20.432813    4327 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 exists in cache, skipping pull
	I0908 13:27:20.432823    4327 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 as a tarball
	I0908 13:27:20.475285    4327 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.0/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-arm64.tar.lz4
	I0908 13:27:20.475317    4327 cache.go:58] Caching tarball of preloaded images
	I0908 13:27:20.475488    4327 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0908 13:27:20.478558    4327 out.go:99] Downloading Kubernetes v1.34.0 preload ...
	I0908 13:27:20.478596    4327 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.34.0-docker-overlay2-arm64.tar.lz4 ...
	I0908 13:27:20.569076    4327 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.0/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-arm64.tar.lz4?checksum=md5:0b3d43bc03104538fd9d40ba6a11edba -> /home/jenkins/minikube-integration/21504-2320/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-arm64.tar.lz4
	I0908 13:27:24.213814    4327 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.34.0-docker-overlay2-arm64.tar.lz4 ...
	I0908 13:27:24.213916    4327 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/21504-2320/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-arm64.tar.lz4 ...
	I0908 13:27:25.021023    4327 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on docker
	I0908 13:27:25.021411    4327 profile.go:143] Saving config to /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/download-only-380759/config.json ...
	I0908 13:27:25.021446    4327 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/download-only-380759/config.json: {Name:mk0f2399a8ac2f03173729fc42e5e454b45baac5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 13:27:25.021646    4327 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0908 13:27:25.021797    4327 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.0/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/21504-2320/.minikube/cache/linux/arm64/v1.34.0/kubectl
	
	
	* The control-plane node download-only-380759 host does not exist
	  To start a cluster, run: "minikube start -p download-only-380759"
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.0/LogsDuration (0.09s)
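The download.go/preload.go lines above show minikube fetching the preload tarball with a `?checksum=md5:...` query parameter and then verifying the saved file. A minimal shell sketch of that verify step, using a throwaway file and a locally computed sum rather than the real tarball (names and contents here are stand-ins, not minikube internals):

```shell
# Sketch of post-download checksum verification. In the real flow,
# "want" comes from the md5 embedded in the URL's checksum parameter;
# here we compute it from a stand-in file so the sketch is self-contained.
tmp=$(mktemp -d)
printf 'preload-tarball-bytes' > "$tmp/preload.tar.lz4"
want=$(md5sum "$tmp/preload.tar.lz4" | awk '{print $1}')
# After download, recompute and compare against the expected sum.
got=$(md5sum "$tmp/preload.tar.lz4" | awk '{print $1}')
if [ "$want" = "$got" ]; then
  echo "checksum OK"
else
  echo "checksum mismatch" >&2
fi
```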

TestDownloadOnly/v1.34.0/DeleteAll (0.21s)

=== RUN   TestDownloadOnly/v1.34.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.34.0/DeleteAll (0.21s)

TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds (0.15s)

=== RUN   TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-380759
--- PASS: TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds (0.15s)

TestBinaryMirror (0.61s)

=== RUN   TestBinaryMirror
I0908 13:27:26.973223    4120 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.0/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-737040 --alsologtostderr --binary-mirror http://127.0.0.1:34793 --driver=docker  --container-runtime=docker
helpers_test.go:175: Cleaning up "binary-mirror-737040" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-737040
--- PASS: TestBinaryMirror (0.61s)

TestOffline (56.89s)

=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-arm64 start -p offline-docker-652152 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=docker
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-arm64 start -p offline-docker-652152 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=docker: (54.68246106s)
helpers_test.go:175: Cleaning up "offline-docker-652152" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p offline-docker-652152
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p offline-docker-652152: (2.209770136s)
--- PASS: TestOffline (56.89s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-238540
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-238540: exit status 85 (70.064522ms)
-- stdout --
	* Profile "addons-238540" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-238540"
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)
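This test treats minikube's exit status 85 (reported alongside the "Profile ... not found" message above) as the expected outcome of enabling an addon on a non-existing cluster. A small shell sketch of capturing and checking a specific exit status the same way; the inner command is a stand-in for the real minikube invocation:

```shell
# Capture the exit status of a command expected to fail with 85.
# `sh -c 'exit 85'` stands in for `minikube addons enable dashboard -p <missing-profile>`.
set +e
sh -c 'exit 85'
status=$?
set -e
if [ "$status" -eq 85 ]; then
  echo "got expected exit status 85"
else
  echo "unexpected exit status: $status" >&2
fi
```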

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-238540
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-238540: exit status 85 (78.130363ms)
-- stdout --
	* Profile "addons-238540" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-238540"
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)
TestAddons/Setup (156.29s)
=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-arm64 start -p addons-238540 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-arm64 start -p addons-238540 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m36.287067602s)
--- PASS: TestAddons/Setup (156.29s)
TestAddons/serial/Volcano (43.16s)
=== RUN   TestAddons/serial/Volcano
addons_test.go:868: volcano-scheduler stabilized in 71.572518ms
addons_test.go:884: volcano-controller stabilized in 71.738999ms
addons_test.go:876: volcano-admission stabilized in 72.196646ms
addons_test.go:890: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-scheduler-799f64f894-hxvxb" [8bc075eb-c017-44ec-9c3c-8b0347270bcc] Running
addons_test.go:890: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.005558716s
addons_test.go:894: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-admission-589c7dd587-qcl26" [66a2058f-49d9-4dc3-8796-fa4e15f1bee2] Running
addons_test.go:894: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.003691236s
addons_test.go:898: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-controllers-7dc6969b45-czsvh" [e4402b9f-d6dd-4ecd-b969-100394a07f8b] Running
addons_test.go:898: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 6.003135409s
addons_test.go:903: (dbg) Run:  kubectl --context addons-238540 delete -n volcano-system job volcano-admission-init
addons_test.go:909: (dbg) Run:  kubectl --context addons-238540 create -f testdata/vcjob.yaml
addons_test.go:917: (dbg) Run:  kubectl --context addons-238540 get vcjob -n my-volcano
addons_test.go:935: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:352: "test-job-nginx-0" [ae0d0649-d8e0-4971-bf16-181c052329e5] Pending
helpers_test.go:352: "test-job-nginx-0" [ae0d0649-d8e0-4971-bf16-181c052329e5] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "test-job-nginx-0" [ae0d0649-d8e0-4971-bf16-181c052329e5] Running
addons_test.go:935: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 14.004391186s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-238540 addons disable volcano --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-238540 addons disable volcano --alsologtostderr -v=1: (11.450042228s)
--- PASS: TestAddons/serial/Volcano (43.16s)
TestAddons/serial/GCPAuth/Namespaces (0.19s)
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-238540 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-238540 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.19s)
TestAddons/serial/GCPAuth/FakeCredentials (9.96s)
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-238540 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-238540 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [36712470-d505-46a4-8987-a8d85db4b15c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [36712470-d505-46a4-8987-a8d85db4b15c] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 9.003556351s
addons_test.go:694: (dbg) Run:  kubectl --context addons-238540 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-238540 describe sa gcp-auth-test
addons_test.go:720: (dbg) Run:  kubectl --context addons-238540 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:744: (dbg) Run:  kubectl --context addons-238540 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (9.96s)
TestAddons/parallel/Registry (18.97s)
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry
=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 5.248941ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-66898fdd98-nmplg" [df4fd545-67e6-41f2-8522-1f0cadcdfe6c] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.003955352s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-4f4sg" [91efb800-f3be-4246-a900-6921815a3f8c] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003587426s
addons_test.go:392: (dbg) Run:  kubectl --context addons-238540 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-238540 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-238540 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (7.065099365s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-arm64 -p addons-238540 ip
2025/09/08 13:31:24 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-238540 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (18.97s)
TestAddons/parallel/RegistryCreds (1.07s)
=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds
=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 6.488134ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-arm64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-238540
addons_test.go:332: (dbg) Run:  kubectl --context addons-238540 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-238540 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (1.07s)
TestAddons/parallel/Ingress (21.77s)
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-238540 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-238540 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-238540 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [38d9a26a-0f02-4a33-93ab-e73481ce44e9] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [38d9a26a-0f02-4a33-93ab-e73481ce44e9] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.003844352s
I0908 13:31:55.216542    4120 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-arm64 -p addons-238540 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-238540 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-arm64 -p addons-238540 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-238540 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-238540 addons disable ingress-dns --alsologtostderr -v=1: (1.836523075s)
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-238540 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-238540 addons disable ingress --alsologtostderr -v=1: (7.838429801s)
--- PASS: TestAddons/parallel/Ingress (21.77s)
TestAddons/parallel/InspektorGadget (6.22s)
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-5bdx7" [6fd5b730-aefb-4931-a5dd-c9d17174af17] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.004805792s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-238540 addons disable inspektor-gadget --alsologtostderr -v=1
--- PASS: TestAddons/parallel/InspektorGadget (6.22s)
TestAddons/parallel/MetricsServer (6.2s)
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 3.833158ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-26x4k" [cf3c3c86-c16d-45e8-b147-a1c065d7ec82] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.004182144s
addons_test.go:463: (dbg) Run:  kubectl --context addons-238540 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-238540 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-238540 addons disable metrics-server --alsologtostderr -v=1: (1.070233412s)
--- PASS: TestAddons/parallel/MetricsServer (6.20s)
TestAddons/parallel/CSI (47.77s)
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
I0908 13:31:25.236624    4120 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0908 13:31:25.240154    4120 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0908 13:31:25.240178    4120 kapi.go:107] duration metric: took 6.518603ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 6.528835ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-238540 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-238540 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-238540 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-238540 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-238540 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [e3b41780-7366-4749-9c97-82cec454d7ce] Pending
helpers_test.go:352: "task-pv-pod" [e3b41780-7366-4749-9c97-82cec454d7ce] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [e3b41780-7366-4749-9c97-82cec454d7ce] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 8.007074443s
addons_test.go:572: (dbg) Run:  kubectl --context addons-238540 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-238540 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-238540 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-238540 delete pod task-pv-pod
addons_test.go:588: (dbg) Run:  kubectl --context addons-238540 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-238540 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-238540 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-238540 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-238540 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-238540 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-238540 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-238540 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-238540 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-238540 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-238540 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-238540 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-238540 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-238540 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-238540 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-238540 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-238540 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-238540 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-238540 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-238540 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-238540 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [d8308554-e047-4398-b8ba-a776fddd9432] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [d8308554-e047-4398-b8ba-a776fddd9432] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.004044385s
addons_test.go:614: (dbg) Run:  kubectl --context addons-238540 delete pod task-pv-pod-restore
addons_test.go:614: (dbg) Done: kubectl --context addons-238540 delete pod task-pv-pod-restore: (1.047621837s)
addons_test.go:618: (dbg) Run:  kubectl --context addons-238540 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-238540 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-238540 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-238540 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-238540 addons disable csi-hostpath-driver --alsologtostderr -v=1: (7.093078583s)
--- PASS: TestAddons/parallel/CSI (47.77s)
TestAddons/parallel/Headlamp (25.81s)
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-238540 --alsologtostderr -v=1
addons_test.go:808: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-238540 --alsologtostderr -v=1: (1.004617833s)
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:352: "headlamp-6f46646d79-l72sv" [f5043030-4b6b-49b0-91a0-593bb8293074] Pending
helpers_test.go:352: "headlamp-6f46646d79-l72sv" [f5043030-4b6b-49b0-91a0-593bb8293074] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-6f46646d79-l72sv" [f5043030-4b6b-49b0-91a0-593bb8293074] Running
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 19.003774262s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-238540 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-238540 addons disable headlamp --alsologtostderr -v=1: (5.799208515s)
--- PASS: TestAddons/parallel/Headlamp (25.81s)
TestAddons/parallel/CloudSpanner (6.52s)
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-c55d4cb6d-kgbdj" [9ee763b8-02a6-486b-adda-80ccf07b39bd] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.002556638s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-238540 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (6.52s)
TestAddons/parallel/LocalPath (52.26s)
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-238540 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-238540 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-238540 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-238540 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-238540 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-238540 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-238540 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [a8159c57-206c-4396-9ae7-7271ceb846de] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [a8159c57-206c-4396-9ae7-7271ceb846de] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [a8159c57-206c-4396-9ae7-7271ceb846de] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.002833398s
addons_test.go:967: (dbg) Run:  kubectl --context addons-238540 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-arm64 -p addons-238540 ssh "cat /opt/local-path-provisioner/pvc-bed066c9-11ee-4623-ae35-4f10aeddd573_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-238540 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-238540 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-238540 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-238540 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.032991256s)
--- PASS: TestAddons/parallel/LocalPath (52.26s)
TestAddons/parallel/NvidiaDevicePlugin (6.51s)
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-vp886" [1f3abb60-1a92-4152-a973-1a5c2ecc3106] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.003485821s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-238540 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.51s)
TestAddons/parallel/Yakd (11.77s)
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-rdccn" [1c48cbe3-b82f-4ed2-97b1-36f3ae05fa12] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003492621s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-238540 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-238540 addons disable yakd --alsologtostderr -v=1: (5.766404267s)
--- PASS: TestAddons/parallel/Yakd (11.77s)
TestAddons/StoppedEnableDisable (11.09s)
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-238540
addons_test.go:172: (dbg) Done: out/minikube-linux-arm64 stop -p addons-238540: (10.820789761s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-238540
addons_test.go:180: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-238540
addons_test.go:185: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-238540
--- PASS: TestAddons/StoppedEnableDisable (11.09s)
TestCertOptions (42.79s)
=== RUN   TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-005985 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-005985 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker: (39.966997142s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-005985 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-005985 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-005985 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-005985" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-005985
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-005985: (2.186422607s)
--- PASS: TestCertOptions (42.79s)

TestCertExpiration (263.87s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-611437 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=docker
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-611437 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=docker: (40.936283101s)
E0908 14:12:20.268495    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/functional-082913/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-611437 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=docker
E0908 14:15:03.989061    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/addons-238540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:15:23.333546    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/functional-082913/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:15:30.791568    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/skaffold-797663/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-611437 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=docker: (40.591976616s)
helpers_test.go:175: Cleaning up "cert-expiration-611437" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-611437
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-611437: (2.339122047s)
--- PASS: TestCertExpiration (263.87s)

TestDockerFlags (42.3s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-linux-arm64 start -p docker-flags-992525 --cache-images=false --memory=3072 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:51: (dbg) Done: out/minikube-linux-arm64 start -p docker-flags-992525 --cache-images=false --memory=3072 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (38.700190561s)
docker_test.go:56: (dbg) Run:  out/minikube-linux-arm64 -p docker-flags-992525 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-linux-arm64 -p docker-flags-992525 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-992525" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-flags-992525
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-flags-992525: (2.74639703s)
--- PASS: TestDockerFlags (42.30s)

TestForceSystemdFlag (44.67s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-511214 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-511214 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (41.524918614s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-511214 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-511214" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-511214
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-511214: (2.672426097s)
--- PASS: TestForceSystemdFlag (44.67s)

TestForceSystemdEnv (46.48s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-534874 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-534874 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (43.304816177s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-534874 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-534874" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-534874
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-534874: (2.533185757s)
--- PASS: TestForceSystemdEnv (46.48s)

TestErrorSpam/setup (34.07s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-625525 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-625525 --driver=docker  --container-runtime=docker
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-625525 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-625525 --driver=docker  --container-runtime=docker: (34.073799766s)
--- PASS: TestErrorSpam/setup (34.07s)

TestErrorSpam/start (0.76s)

=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-625525 --log_dir /tmp/nospam-625525 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-625525 --log_dir /tmp/nospam-625525 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-625525 --log_dir /tmp/nospam-625525 start --dry-run
--- PASS: TestErrorSpam/start (0.76s)

TestErrorSpam/status (1.1s)

=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-625525 --log_dir /tmp/nospam-625525 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-625525 --log_dir /tmp/nospam-625525 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-625525 --log_dir /tmp/nospam-625525 status
--- PASS: TestErrorSpam/status (1.10s)

TestErrorSpam/pause (1.33s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-625525 --log_dir /tmp/nospam-625525 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-625525 --log_dir /tmp/nospam-625525 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-625525 --log_dir /tmp/nospam-625525 pause
--- PASS: TestErrorSpam/pause (1.33s)

TestErrorSpam/unpause (1.44s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-625525 --log_dir /tmp/nospam-625525 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-625525 --log_dir /tmp/nospam-625525 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-625525 --log_dir /tmp/nospam-625525 unpause
--- PASS: TestErrorSpam/unpause (1.44s)

TestErrorSpam/stop (2.24s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-625525 --log_dir /tmp/nospam-625525 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-arm64 -p nospam-625525 --log_dir /tmp/nospam-625525 stop: (2.03156574s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-625525 --log_dir /tmp/nospam-625525 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-625525 --log_dir /tmp/nospam-625525 stop
--- PASS: TestErrorSpam/stop (2.24s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21504-2320/.minikube/files/etc/test/nested/copy/4120/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (73.57s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-arm64 start -p functional-082913 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker
E0908 13:35:03.994255    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/addons-238540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:35:04.001530    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/addons-238540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:35:04.012925    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/addons-238540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:35:04.034334    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/addons-238540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:35:04.075817    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/addons-238540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:35:04.157255    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/addons-238540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:35:04.318801    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/addons-238540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:35:04.640520    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/addons-238540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:35:05.282607    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/addons-238540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:35:06.564240    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/addons-238540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:35:09.126894    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/addons-238540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:35:14.248366    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/addons-238540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-arm64 start -p functional-082913 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker: (1m13.565356101s)
--- PASS: TestFunctional/serial/StartWithProxy (73.57s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (50.83s)

=== RUN   TestFunctional/serial/SoftStart
I0908 13:35:16.402825    4120 config.go:182] Loaded profile config "functional-082913": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
functional_test.go:674: (dbg) Run:  out/minikube-linux-arm64 start -p functional-082913 --alsologtostderr -v=8
E0908 13:35:24.490314    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/addons-238540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:35:44.971770    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/addons-238540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Done: out/minikube-linux-arm64 start -p functional-082913 --alsologtostderr -v=8: (50.823497934s)
functional_test.go:678: soft start took 50.830148639s for "functional-082913" cluster.
I0908 13:36:07.227221    4120 config.go:182] Loaded profile config "functional-082913": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
--- PASS: TestFunctional/serial/SoftStart (50.83s)

TestFunctional/serial/KubeContext (0.06s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

TestFunctional/serial/KubectlGetPods (0.11s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-082913 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.11s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.92s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-082913 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-082913 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-082913 cache add registry.k8s.io/pause:3.3: (1.022559154s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-082913 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.92s)

TestFunctional/serial/CacheCmd/cache/add_local (1.11s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-082913 /tmp/TestFunctionalserialCacheCmdcacheadd_local3980337115/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-arm64 -p functional-082913 cache add minikube-local-cache-test:functional-082913
functional_test.go:1109: (dbg) Run:  out/minikube-linux-arm64 -p functional-082913 cache delete minikube-local-cache-test:functional-082913
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-082913
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.11s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

TestFunctional/serial/CacheCmd/cache/list (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.07s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.33s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-arm64 -p functional-082913 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.33s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.56s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-arm64 -p functional-082913 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 -p functional-082913 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-082913 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (309.613422ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-arm64 -p functional-082913 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-arm64 -p functional-082913 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.56s)

TestFunctional/serial/CacheCmd/cache/delete (0.12s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

TestFunctional/serial/MinikubeKubectlCmd (0.13s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-arm64 -p functional-082913 kubectl -- --context functional-082913 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.13s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-082913 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

TestFunctional/serial/ExtraConfig (57.32s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-arm64 start -p functional-082913 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0908 13:36:25.934952    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/addons-238540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-arm64 start -p functional-082913 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (57.315621221s)
functional_test.go:776: restart took 57.315744039s for "functional-082913" cluster.
I0908 13:37:11.139020    4120 config.go:182] Loaded profile config "functional-082913": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
--- PASS: TestFunctional/serial/ExtraConfig (57.32s)

TestFunctional/serial/ComponentHealth (0.1s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-082913 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)

TestFunctional/serial/LogsCmd (1.21s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-arm64 -p functional-082913 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-arm64 -p functional-082913 logs: (1.211917408s)
--- PASS: TestFunctional/serial/LogsCmd (1.21s)

TestFunctional/serial/LogsFileCmd (1.2s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-arm64 -p functional-082913 logs --file /tmp/TestFunctionalserialLogsFileCmd1833911283/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-arm64 -p functional-082913 logs --file /tmp/TestFunctionalserialLogsFileCmd1833911283/001/logs.txt: (1.203857914s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.20s)

TestFunctional/serial/InvalidService (4.78s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-082913 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-082913
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-082913: exit status 115 (396.085529ms)

-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:32717 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-082913 delete -f testdata/invalidsvc.yaml
functional_test.go:2332: (dbg) Done: kubectl --context functional-082913 delete -f testdata/invalidsvc.yaml: (1.134878595s)
--- PASS: TestFunctional/serial/InvalidService (4.78s)

TestFunctional/parallel/ConfigCmd (0.49s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-082913 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-082913 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-082913 config get cpus: exit status 14 (104.7266ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-082913 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-082913 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-082913 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-082913 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-082913 config get cpus: exit status 14 (68.219502ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.49s)

TestFunctional/parallel/DashboardCmd (13.43s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-082913 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-082913 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 45283: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (13.43s)

TestFunctional/parallel/DryRun (0.48s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-arm64 start -p functional-082913 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-082913 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (200.154396ms)

-- stdout --
	* [functional-082913] minikube v1.36.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21504
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21504-2320/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21504-2320/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0908 13:37:52.649608   45030 out.go:360] Setting OutFile to fd 1 ...
	I0908 13:37:52.649725   45030 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 13:37:52.649735   45030 out.go:374] Setting ErrFile to fd 2...
	I0908 13:37:52.649741   45030 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 13:37:52.649994   45030 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21504-2320/.minikube/bin
	I0908 13:37:52.650341   45030 out.go:368] Setting JSON to false
	I0908 13:37:52.651371   45030 start.go:130] hostinfo: {"hostname":"ip-172-31-29-130","uptime":1221,"bootTime":1757337452,"procs":214,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0908 13:37:52.651437   45030 start.go:140] virtualization:  
	I0908 13:37:52.654699   45030 out.go:179] * [functional-082913] minikube v1.36.0 on Ubuntu 20.04 (arm64)
	I0908 13:37:52.657720   45030 out.go:179]   - MINIKUBE_LOCATION=21504
	I0908 13:37:52.657768   45030 notify.go:220] Checking for updates...
	I0908 13:37:52.661746   45030 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0908 13:37:52.664633   45030 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21504-2320/kubeconfig
	I0908 13:37:52.667524   45030 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21504-2320/.minikube
	I0908 13:37:52.670284   45030 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0908 13:37:52.673319   45030 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0908 13:37:52.676763   45030 config.go:182] Loaded profile config "functional-082913": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0908 13:37:52.677398   45030 driver.go:421] Setting default libvirt URI to qemu:///system
	I0908 13:37:52.700956   45030 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0908 13:37:52.701054   45030 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0908 13:37:52.778382   45030 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:53 SystemTime:2025-09-08 13:37:52.76854339 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0908 13:37:52.778494   45030 docker.go:318] overlay module found
	I0908 13:37:52.784339   45030 out.go:179] * Using the docker driver based on existing profile
	I0908 13:37:52.787176   45030 start.go:304] selected driver: docker
	I0908 13:37:52.787194   45030 start.go:918] validating driver "docker" against &{Name:functional-082913 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-082913 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 13:37:52.787317   45030 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0908 13:37:52.790986   45030 out.go:203] 
	W0908 13:37:52.793908   45030 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0908 13:37:52.796693   45030 out.go:203] 

** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-arm64 start -p functional-082913 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
--- PASS: TestFunctional/parallel/DryRun (0.48s)

TestFunctional/parallel/InternationalLanguage (0.25s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-arm64 start -p functional-082913 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-082913 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (245.05011ms)

-- stdout --
	* [functional-082913] minikube v1.36.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21504
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21504-2320/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21504-2320/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0908 13:37:52.439101   44930 out.go:360] Setting OutFile to fd 1 ...
	I0908 13:37:52.439356   44930 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 13:37:52.439385   44930 out.go:374] Setting ErrFile to fd 2...
	I0908 13:37:52.439404   44930 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 13:37:52.440739   44930 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21504-2320/.minikube/bin
	I0908 13:37:52.441220   44930 out.go:368] Setting JSON to false
	I0908 13:37:52.442221   44930 start.go:130] hostinfo: {"hostname":"ip-172-31-29-130","uptime":1221,"bootTime":1757337452,"procs":216,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0908 13:37:52.442323   44930 start.go:140] virtualization:  
	I0908 13:37:52.446222   44930 out.go:179] * [functional-082913] minikube v1.36.0 sur Ubuntu 20.04 (arm64)
	I0908 13:37:52.449610   44930 out.go:179]   - MINIKUBE_LOCATION=21504
	I0908 13:37:52.449793   44930 notify.go:220] Checking for updates...
	I0908 13:37:52.456143   44930 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0908 13:37:52.459032   44930 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21504-2320/kubeconfig
	I0908 13:37:52.461936   44930 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21504-2320/.minikube
	I0908 13:37:52.464833   44930 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0908 13:37:52.467860   44930 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0908 13:37:52.471246   44930 config.go:182] Loaded profile config "functional-082913": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0908 13:37:52.471804   44930 driver.go:421] Setting default libvirt URI to qemu:///system
	I0908 13:37:52.514044   44930 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0908 13:37:52.514284   44930 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0908 13:37:52.576269   44930 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:53 SystemTime:2025-09-08 13:37:52.56623969 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0908 13:37:52.576380   44930 docker.go:318] overlay module found
	I0908 13:37:52.581441   44930 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I0908 13:37:52.584411   44930 start.go:304] selected driver: docker
	I0908 13:37:52.584431   44930 start.go:918] validating driver "docker" against &{Name:functional-082913 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-082913 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 13:37:52.584543   44930 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0908 13:37:52.589617   44930 out.go:203] 
	W0908 13:37:52.592423   44930 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0908 13:37:52.594860   44930 out.go:203] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.25s)

TestFunctional/parallel/StatusCmd (1.24s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-arm64 -p functional-082913 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-arm64 -p functional-082913 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-arm64 -p functional-082913 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.24s)

TestFunctional/parallel/ServiceCmdConnect (10.85s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-082913 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-082913 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-2jhwk" [0bed9aa5-81a0-44e3-965d-b41ed2ad7ced] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-connect-7d85dfc575-2jhwk" [0bed9aa5-81a0-44e3-965d-b41ed2ad7ced] Running
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 10.004782767s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-arm64 -p functional-082913 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.49.2:31812
functional_test.go:1680: http://192.168.49.2:31812: success! body:
Request served by hello-node-connect-7d85dfc575-2jhwk

HTTP/1.1 GET /

Host: 192.168.49.2:31812
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (10.85s)

TestFunctional/parallel/AddonsCmd (0.21s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-arm64 -p functional-082913 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-arm64 -p functional-082913 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.21s)

TestFunctional/parallel/PersistentVolumeClaim (27.12s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [f8054420-b246-4637-841f-2ca231731f7b] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.004960573s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-082913 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-082913 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-082913 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-082913 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [47af03bb-174d-4a16-ab8e-059925044ada] Pending
helpers_test.go:352: "sp-pod" [47af03bb-174d-4a16-ab8e-059925044ada] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [47af03bb-174d-4a16-ab8e-059925044ada] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.003282258s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-082913 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-082913 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:112: (dbg) Done: kubectl --context functional-082913 delete -f testdata/storage-provisioner/pod.yaml: (1.06206714s)
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-082913 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [e4e5bd39-905f-4c78-aa2c-ef3184a209fa] Pending
helpers_test.go:352: "sp-pod" [e4e5bd39-905f-4c78-aa2c-ef3184a209fa] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [e4e5bd39-905f-4c78-aa2c-ef3184a209fa] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.003834941s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-082913 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (27.12s)

TestFunctional/parallel/SSHCmd (0.72s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-arm64 -p functional-082913 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-arm64 -p functional-082913 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.72s)
+
TestFunctional/parallel/CpCmd (2.51s)
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-082913 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-082913 ssh -n functional-082913 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-082913 cp functional-082913:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3268723869/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-082913 ssh -n functional-082913 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-082913 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-082913 ssh -n functional-082913 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.51s)

TestFunctional/parallel/FileSync (0.37s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/4120/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-arm64 -p functional-082913 ssh "sudo cat /etc/test/nested/copy/4120/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.37s)

TestFunctional/parallel/CertSync (2.22s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/4120.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-082913 ssh "sudo cat /etc/ssl/certs/4120.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/4120.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-082913 ssh "sudo cat /usr/share/ca-certificates/4120.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-082913 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/41202.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-082913 ssh "sudo cat /etc/ssl/certs/41202.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/41202.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-082913 ssh "sudo cat /usr/share/ca-certificates/41202.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-082913 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.22s)

TestFunctional/parallel/NodeLabels (0.09s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-082913 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.09s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.29s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-082913 ssh "sudo systemctl is-active crio"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-082913 ssh "sudo systemctl is-active crio": exit status 1 (288.291698ms)

-- stdout --
	inactive

-- /stdout --
** stderr **
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.29s)

TestFunctional/parallel/License (0.35s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.35s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.63s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-082913 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-082913 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-082913 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 42382: os: process already finished
helpers_test.go:519: unable to terminate pid 42194: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-082913 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.63s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-082913 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.46s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-082913 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [85240131-84c7-4c67-aa08-ad129b91b337] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx-svc" [85240131-84c7-4c67-aa08-ad129b91b337] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.005065135s
I0908 13:37:29.723482    4120 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.46s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-082913 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.11s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.103.85.133 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-082913 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ServiceCmd/DeployApp (7.2s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-082913 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-082913 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-5wt52" [ce2577cd-51ff-46d2-92f1-19247d0f47e2] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-75c85bcc94-5wt52" [ce2577cd-51ff-46d2-92f1-19247d0f47e2] Running
E0908 13:37:47.856394    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/addons-238540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 7.005203292s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (7.20s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.47s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.47s)

TestFunctional/parallel/ServiceCmd/List (0.56s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-arm64 -p functional-082913 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.56s)

TestFunctional/parallel/ProfileCmd/profile_list (0.56s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1330: Took "487.035184ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1344: Took "74.361214ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.56s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.64s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-arm64 -p functional-082913 service list -o json
functional_test.go:1504: Took "637.28226ms" to run "out/minikube-linux-arm64 -p functional-082913 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.64s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.52s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1381: Took "436.217332ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1394: Took "85.091048ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.52s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.61s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-arm64 -p functional-082913 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.49.2:30759
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.61s)

TestFunctional/parallel/MountCmd/any-port (9.54s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-082913 /tmp/TestFunctionalparallelMountCmdany-port2485863839/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1757338669617617545" to /tmp/TestFunctionalparallelMountCmdany-port2485863839/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1757338669617617545" to /tmp/TestFunctionalparallelMountCmdany-port2485863839/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1757338669617617545" to /tmp/TestFunctionalparallelMountCmdany-port2485863839/001/test-1757338669617617545
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-082913 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-082913 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (490.390427ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
I0908 13:37:50.110288    4120 retry.go:31] will retry after 359.933712ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-082913 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-082913 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep  8 13:37 created-by-test
-rw-r--r-- 1 docker docker 24 Sep  8 13:37 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep  8 13:37 test-1757338669617617545
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-082913 ssh cat /mount-9p/test-1757338669617617545
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-082913 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [9aa5c5e0-8faa-46de-91ba-4f89095897ad] Pending
helpers_test.go:352: "busybox-mount" [9aa5c5e0-8faa-46de-91ba-4f89095897ad] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [9aa5c5e0-8faa-46de-91ba-4f89095897ad] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [9aa5c5e0-8faa-46de-91ba-4f89095897ad] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 6.003952809s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-082913 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-082913 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-082913 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-082913 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-082913 /tmp/TestFunctionalparallelMountCmdany-port2485863839/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (9.54s)

TestFunctional/parallel/ServiceCmd/Format (0.47s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-arm64 -p functional-082913 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.47s)

TestFunctional/parallel/ServiceCmd/URL (0.62s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-arm64 -p functional-082913 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:30759
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.62s)

TestFunctional/parallel/MountCmd/specific-port (1.7s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-082913 /tmp/TestFunctionalparallelMountCmdspecific-port2874236382/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-082913 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-082913 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-082913 /tmp/TestFunctionalparallelMountCmdspecific-port2874236382/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-082913 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-082913 ssh "sudo umount -f /mount-9p": exit status 1 (497.763067ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr **
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-082913 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-082913 /tmp/TestFunctionalparallelMountCmdspecific-port2874236382/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.70s)

TestFunctional/parallel/MountCmd/VerifyCleanup (2.43s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-082913 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3513052727/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-082913 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3513052727/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-082913 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3513052727/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-082913 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-082913 ssh "findmnt -T" /mount1: exit status 1 (1.057043364s)

** stderr **
	ssh: Process exited with status 1

** /stderr **
I0908 13:38:01.917271    4120 retry.go:31] will retry after 451.34381ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-082913 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-082913 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-082913 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-082913 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-082913 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3513052727/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-082913 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3513052727/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-082913 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3513052727/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.43s)

TestFunctional/parallel/Version/short (0.07s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-arm64 -p functional-082913 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

TestFunctional/parallel/Version/components (1.29s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-arm64 -p functional-082913 version -o=json --components
functional_test.go:2275: (dbg) Done: out/minikube-linux-arm64 -p functional-082913 version -o=json --components: (1.291553211s)
--- PASS: TestFunctional/parallel/Version/components (1.29s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-082913 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-082913 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.0
registry.k8s.io/kube-proxy:v1.34.0
registry.k8s.io/kube-controller-manager:v1.34.0
registry.k8s.io/kube-apiserver:v1.34.0
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-082913
docker.io/kubernetesui/metrics-scraper:<none>
docker.io/kubernetesui/dashboard:<none>
docker.io/kicbase/echo-server:latest
docker.io/kicbase/echo-server:functional-082913
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-082913 image ls --format short --alsologtostderr:
I0908 13:38:11.148144   48111 out.go:360] Setting OutFile to fd 1 ...
I0908 13:38:11.148343   48111 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0908 13:38:11.148357   48111 out.go:374] Setting ErrFile to fd 2...
I0908 13:38:11.148362   48111 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0908 13:38:11.148789   48111 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21504-2320/.minikube/bin
I0908 13:38:11.150115   48111 config.go:182] Loaded profile config "functional-082913": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
I0908 13:38:11.150401   48111 config.go:182] Loaded profile config "functional-082913": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
I0908 13:38:11.150937   48111 cli_runner.go:164] Run: docker container inspect functional-082913 --format={{.State.Status}}
I0908 13:38:11.172052   48111 ssh_runner.go:195] Run: systemctl --version
I0908 13:38:11.172107   48111 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-082913
I0908 13:38:11.193188   48111 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21504-2320/.minikube/machines/functional-082913/id_rsa Username:docker}
I0908 13:38:11.291385   48111 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.25s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-082913 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-082913 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────────┬───────────────────┬───────────────┬────────┐
│                    IMAGE                    │        TAG        │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────────┼───────────────────┼───────────────┼────────┤
│ registry.k8s.io/pause                       │ latest            │ 8cb2091f603e7 │ 240kB  │
│ docker.io/kubernetesui/dashboard            │ <none>            │ 20b332c9a70d8 │ 244MB  │
│ docker.io/kicbase/echo-server               │ functional-082913 │ ce2d2cda2d858 │ 4.78MB │
│ docker.io/kicbase/echo-server               │ latest            │ ce2d2cda2d858 │ 4.78MB │
│ registry.k8s.io/pause                       │ 3.1               │ 8057e0500773a │ 525kB  │
│ docker.io/library/nginx                     │ alpine            │ 35f3cbee4fb77 │ 52.9MB │
│ registry.k8s.io/pause                       │ 3.10.1            │ d7b100cd9a77b │ 514kB  │
│ gcr.io/k8s-minikube/storage-provisioner     │ v5                │ ba04bb24b9575 │ 29MB   │
│ registry.k8s.io/kube-apiserver              │ v1.34.0           │ d291939e99406 │ 83.7MB │
│ registry.k8s.io/kube-controller-manager     │ v1.34.0           │ 996be7e86d9b3 │ 71.5MB │
│ registry.k8s.io/kube-scheduler              │ v1.34.0           │ a25f5ef9c34c3 │ 50.5MB │
│ docker.io/library/nginx                     │ latest            │ 47ef8710c9f5a │ 198MB  │
│ registry.k8s.io/etcd                        │ 3.6.4-0           │ a1894772a478e │ 205MB  │
│ docker.io/kubernetesui/metrics-scraper      │ <none>            │ a422e0e982356 │ 42.3MB │
│ registry.k8s.io/pause                       │ 3.3               │ 3d18732f8686c │ 484kB  │
│ gcr.io/k8s-minikube/busybox                 │ 1.28.4-glibc      │ 1611cd07b61d5 │ 3.55MB │
│ docker.io/library/minikube-local-cache-test │ functional-082913 │ 2c2b375002df4 │ 30B    │
│ registry.k8s.io/kube-proxy                  │ v1.34.0           │ 6fc32d66c1411 │ 74.7MB │
│ registry.k8s.io/coredns/coredns             │ v1.12.1           │ 138784d87c9c5 │ 72.1MB │
└─────────────────────────────────────────────┴───────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-082913 image ls --format table --alsologtostderr:
I0908 13:38:12.041371   48329 out.go:360] Setting OutFile to fd 1 ...
I0908 13:38:12.041633   48329 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0908 13:38:12.041650   48329 out.go:374] Setting ErrFile to fd 2...
I0908 13:38:12.041656   48329 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0908 13:38:12.042072   48329 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21504-2320/.minikube/bin
I0908 13:38:12.043273   48329 config.go:182] Loaded profile config "functional-082913": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
I0908 13:38:12.043489   48329 config.go:182] Loaded profile config "functional-082913": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
I0908 13:38:12.044022   48329 cli_runner.go:164] Run: docker container inspect functional-082913 --format={{.State.Status}}
I0908 13:38:12.068874   48329 ssh_runner.go:195] Run: systemctl --version
I0908 13:38:12.068929   48329 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-082913
I0908 13:38:12.093028   48329 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21504-2320/.minikube/machines/functional-082913/id_rsa Username:docker}
I0908 13:38:12.185654   48329 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.25s)
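The `--format table` listing above is meant for humans, but its Unicode box-drawing separators are easy to split on when a script does need to consume it. A minimal Python sketch (the two sample rows are copied from the table above; this is an illustration, not how minikube or the test harness parses it):

```python
# Parse rows of `minikube image ls --format table` output, which uses
# the Unicode box-drawing pipe (│) as its column separator.
sample = """\
│ registry.k8s.io/pause                       │ latest            │ 8cb2091f603e7 │ 240kB  │
│ gcr.io/k8s-minikube/storage-provisioner     │ v5                │ ba04bb24b9575 │ 29MB   │"""

rows = []
for line in sample.splitlines():
    # Splitting on │ yields empty strings at both edges; drop them with [1:-1].
    cells = [c.strip() for c in line.split("│")][1:-1]
    image, tag, image_id, size = cells
    rows.append({"image": image, "tag": tag, "id": image_id, "size": size})

print(rows[0]["image"], rows[0]["tag"])  # → registry.k8s.io/pause latest
```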

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-082913 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-082913 image ls --format json --alsologtostderr:
[{"id":"47ef8710c9f5a9276b3e347e3ab71ee44c8483e20f8636380ae2737ef4c27758","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"198000000"},{"id":"d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"514000"},{"id":"ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-082913","docker.io/kicbase/echo-server:latest"],"size":"4780000"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"42300000"},{"id":"2c2b375002df4c299829229f4630fa45ab9b83b46600f87b0aa6ec2e3cb119f7","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-082913"],"size":"30"},{"id":"35f3cbee4fb77c3efb39f2723a21ce181906139442a37de8ffc52d89641d9936","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"s
ize":"52900000"},{"id":"138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"72100000"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29000000"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"484000"},{"id":"a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"205000000"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"525000"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"d291939e994064911484215449d0ab96c535b02adc4fc5d0ad4e438cf71465be","r
epoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.0"],"size":"83700000"},{"id":"a25f5ef9c34c37c649f3b4f9631a169221ac2d6f41d9767c7588cd355f76f9ee","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.0"],"size":"50500000"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"244000000"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3550000"},{"id":"996be7e86d9b3a549d718de63713d9fea9db1f45ac44863a6770292d7b463570","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.0"],"size":"71500000"},{"id":"6fc32d66c141152245438e6512df788cb52d64a1617e33561950b0e7a4675abf","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.34.0"],"size":"74700000"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-082913 image ls --format json --alsologtostderr:
I0908 13:38:11.803222   48269 out.go:360] Setting OutFile to fd 1 ...
I0908 13:38:11.803358   48269 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0908 13:38:11.803384   48269 out.go:374] Setting ErrFile to fd 2...
I0908 13:38:11.803406   48269 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0908 13:38:11.803697   48269 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21504-2320/.minikube/bin
I0908 13:38:11.804366   48269 config.go:182] Loaded profile config "functional-082913": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
I0908 13:38:11.804537   48269 config.go:182] Loaded profile config "functional-082913": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
I0908 13:38:11.804995   48269 cli_runner.go:164] Run: docker container inspect functional-082913 --format={{.State.Status}}
I0908 13:38:11.823337   48269 ssh_runner.go:195] Run: systemctl --version
I0908 13:38:11.823416   48269 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-082913
I0908 13:38:11.841074   48269 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21504-2320/.minikube/machines/functional-082913/id_rsa Username:docker}
I0908 13:38:11.935810   48269 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)
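Unlike the table format, the `--format json` output above is a plain JSON array whose entries carry `id`, `repoDigests`, `repoTags`, and a byte `size` encoded as a string. A minimal Python sketch of consuming it (the sample below is a trimmed, hand-copied subset of the output above, with the image ids shortened for readability):

```python
import json

# Trimmed sample of `minikube image ls --format json` output as shown above.
sample = '''[
  {"id": "8cb2091f603e7", "repoDigests": [], "repoTags": ["registry.k8s.io/pause:latest"], "size": "240000"},
  {"id": "a1894772a478e", "repoDigests": [], "repoTags": ["registry.k8s.io/etcd:3.6.4-0"], "size": "205000000"}
]'''

images = json.loads(sample)
# Index every entry by each of its repo tags for quick lookup.
by_tag = {tag: img for img in images for tag in img["repoTags"]}
# Sizes are JSON strings, so convert before doing arithmetic on them.
etcd_size = int(by_tag["registry.k8s.io/etcd:3.6.4-0"]["size"])
print(etcd_size)  # → 205000000
```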

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-082913 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-082913 image ls --format yaml --alsologtostderr:
- id: a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "205000000"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "42300000"
- id: a25f5ef9c34c37c649f3b4f9631a169221ac2d6f41d9767c7588cd355f76f9ee
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.0
size: "50500000"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "525000"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: d291939e994064911484215449d0ab96c535b02adc4fc5d0ad4e438cf71465be
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.0
size: "83700000"
- id: 47ef8710c9f5a9276b3e347e3ab71ee44c8483e20f8636380ae2737ef4c27758
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "198000000"
- id: 35f3cbee4fb77c3efb39f2723a21ce181906139442a37de8ffc52d89641d9936
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "52900000"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "244000000"
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-082913
- docker.io/kicbase/echo-server:latest
size: "4780000"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3550000"
- id: 2c2b375002df4c299829229f4630fa45ab9b83b46600f87b0aa6ec2e3cb119f7
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-082913
size: "30"
- id: 6fc32d66c141152245438e6512df788cb52d64a1617e33561950b0e7a4675abf
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.34.0
size: "74700000"
- id: d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10.1
size: "514000"
- id: 138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "72100000"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29000000"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "484000"
- id: 996be7e86d9b3a549d718de63713d9fea9db1f45ac44863a6770292d7b463570
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.0
size: "71500000"
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-082913 image ls --format yaml --alsologtostderr:
I0908 13:38:11.375765   48143 out.go:360] Setting OutFile to fd 1 ...
I0908 13:38:11.376034   48143 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0908 13:38:11.376058   48143 out.go:374] Setting ErrFile to fd 2...
I0908 13:38:11.376094   48143 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0908 13:38:11.376470   48143 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21504-2320/.minikube/bin
I0908 13:38:11.377156   48143 config.go:182] Loaded profile config "functional-082913": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
I0908 13:38:11.377321   48143 config.go:182] Loaded profile config "functional-082913": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
I0908 13:38:11.377790   48143 cli_runner.go:164] Run: docker container inspect functional-082913 --format={{.State.Status}}
I0908 13:38:11.413656   48143 ssh_runner.go:195] Run: systemctl --version
I0908 13:38:11.413711   48143 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-082913
I0908 13:38:11.432776   48143 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21504-2320/.minikube/machines/functional-082913/id_rsa Username:docker}
I0908 13:38:11.529064   48143 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.24s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (3.77s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-arm64 -p functional-082913 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-082913 ssh pgrep buildkitd: exit status 1 (295.477703ms)

                                                
                                                
** stderr **
	ssh: Process exited with status 1
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-arm64 -p functional-082913 image build -t localhost/my-image:functional-082913 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-arm64 -p functional-082913 image build -t localhost/my-image:functional-082913 testdata/build --alsologtostderr: (3.241919382s)
functional_test.go:338: (dbg) Stderr: out/minikube-linux-arm64 -p functional-082913 image build -t localhost/my-image:functional-082913 testdata/build --alsologtostderr:
I0908 13:38:11.913832   48298 out.go:360] Setting OutFile to fd 1 ...
I0908 13:38:11.914957   48298 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0908 13:38:11.915007   48298 out.go:374] Setting ErrFile to fd 2...
I0908 13:38:11.915027   48298 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0908 13:38:11.915379   48298 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21504-2320/.minikube/bin
I0908 13:38:11.916350   48298 config.go:182] Loaded profile config "functional-082913": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
I0908 13:38:11.918574   48298 config.go:182] Loaded profile config "functional-082913": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
I0908 13:38:11.919253   48298 cli_runner.go:164] Run: docker container inspect functional-082913 --format={{.State.Status}}
I0908 13:38:11.942951   48298 ssh_runner.go:195] Run: systemctl --version
I0908 13:38:11.943014   48298 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-082913
I0908 13:38:11.967710   48298 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21504-2320/.minikube/machines/functional-082913/id_rsa Username:docker}
I0908 13:38:12.060034   48298 build_images.go:161] Building image from path: /tmp/build.3918680234.tar
I0908 13:38:12.060119   48298 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0908 13:38:12.072161   48298 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3918680234.tar
I0908 13:38:12.077267   48298 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3918680234.tar: stat -c "%s %y" /var/lib/minikube/build/build.3918680234.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.3918680234.tar': No such file or directory
I0908 13:38:12.077300   48298 ssh_runner.go:362] scp /tmp/build.3918680234.tar --> /var/lib/minikube/build/build.3918680234.tar (3072 bytes)
I0908 13:38:12.105483   48298 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3918680234
I0908 13:38:12.118077   48298 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3918680234 -xf /var/lib/minikube/build/build.3918680234.tar
I0908 13:38:12.133122   48298 docker.go:361] Building image: /var/lib/minikube/build/build.3918680234
I0908 13:38:12.133209   48298 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-082913 /var/lib/minikube/build/build.3918680234
#0 building with "default" instance using docker driver

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.5s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9 527B / 527B done
#5 sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02 1.47kB / 1.47kB done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.1s
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.5s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.0s done
#5 DONE 0.7s

#6 [2/3] RUN true
#6 DONE 0.3s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers 0.1s done
#8 writing image sha256:43fbbeaa250df3d4a138151b948f35dd230d8721153bb860822958339e9db2ff done
#8 naming to localhost/my-image:functional-082913 done
#8 DONE 0.1s
I0908 13:38:15.059353   48298 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-082913 /var/lib/minikube/build/build.3918680234: (2.926107023s)
I0908 13:38:15.059448   48298 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3918680234
I0908 13:38:15.077448   48298 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3918680234.tar
I0908 13:38:15.087730   48298 build_images.go:217] Built localhost/my-image:functional-082913 from /tmp/build.3918680234.tar
I0908 13:38:15.087762   48298 build_images.go:133] succeeded building to: functional-082913
I0908 13:38:15.087767   48298 build_images.go:134] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-082913 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.77s)
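For reference, the `image build` log above is BuildKit progress output: every line is prefixed with a `#N` step number, so the steps of a build can be recovered by grouping on that prefix. A minimal Python sketch (sample lines copied from the log above; this is an illustration of the output shape, not how minikube processes it):

```python
# Group BuildKit progress lines by their leading step number (#N).
log = """\
#1 [internal] load build definition from Dockerfile
#1 DONE 0.0s
#6 [2/3] RUN true
#6 DONE 0.3s
#7 [3/3] ADD content.txt /
#7 DONE 0.0s"""

steps = {}
for line in log.splitlines():
    # partition() splits off the "#N" prefix; the rest is the step's message.
    num, _, rest = line.partition(" ")
    steps.setdefault(num, []).append(rest)

print(len(steps))  # → 3
```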

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (1s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-082913
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.00s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.89s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-arm64 -p functional-082913 image load --daemon kicbase/echo-server:functional-082913 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-082913 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.89s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.85s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-082913 image load --daemon kicbase/echo-server:functional-082913 --alsologtostderr
2025/09/08 13:38:06 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-082913 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.85s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.23s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-082913 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.23s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.21s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-082913 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.21s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.21s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-082913 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.21s)

                                                
                                    
TestFunctional/parallel/DockerEnv/bash (1.34s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:514: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-arm64 -p functional-082913 docker-env) && out/minikube-linux-arm64 status -p functional-082913"
functional_test.go:537: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-arm64 -p functional-082913 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (1.34s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.45s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-082913
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-082913 image load --daemon kicbase/echo-server:functional-082913 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-082913 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.45s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.39s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-arm64 -p functional-082913 image save kicbase/echo-server:functional-082913 /home/jenkins/workspace/Docker_Linux_docker_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.39s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.55s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-arm64 -p functional-082913 image rm kicbase/echo-server:functional-082913 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-082913 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.55s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.7s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-082913 image load /home/jenkins/workspace/Docker_Linux_docker_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-082913 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.70s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.54s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-082913
functional_test.go:439: (dbg) Run:  out/minikube-linux-arm64 -p functional-082913 image save --daemon kicbase/echo-server:functional-082913 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect kicbase/echo-server:functional-082913
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.54s)

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-082913
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-082913
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-082913
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (133.69s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 -p ha-894946 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=docker
E0908 13:40:03.989194    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/addons-238540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 -p ha-894946 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=docker: (2m12.840293241s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-894946 status --alsologtostderr -v 5
E0908 13:40:31.698411    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/addons-238540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestMultiControlPlane/serial/StartCluster (133.69s)

                                                
                                    
TestMultiControlPlane/serial/DeployApp (44.97s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 -p ha-894946 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 -p ha-894946 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 -p ha-894946 kubectl -- rollout status deployment/busybox: (5.007528855s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-894946 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
I0908 13:40:37.214108    4120 retry.go:31] will retry after 957.22907ms: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-894946 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
I0908 13:40:38.350861    4120 retry.go:31] will retry after 1.566501272s: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-894946 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
I0908 13:40:40.120178    4120 retry.go:31] will retry after 2.985173918s: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-894946 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
I0908 13:40:43.282052    4120 retry.go:31] will retry after 2.065681481s: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-894946 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
I0908 13:40:45.570042    4120 retry.go:31] will retry after 5.450681026s: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-894946 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
I0908 13:40:51.210182    4120 retry.go:31] will retry after 10.455327436s: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-894946 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
I0908 13:41:01.851112    4120 retry.go:31] will retry after 11.56625402s: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-894946 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 -p ha-894946 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-894946 kubectl -- exec busybox-7b57f96db7-s4h2n -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-894946 kubectl -- exec busybox-7b57f96db7-wfg67 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-894946 kubectl -- exec busybox-7b57f96db7-wp89j -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-894946 kubectl -- exec busybox-7b57f96db7-s4h2n -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-894946 kubectl -- exec busybox-7b57f96db7-wfg67 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-894946 kubectl -- exec busybox-7b57f96db7-wp89j -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-894946 kubectl -- exec busybox-7b57f96db7-s4h2n -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-894946 kubectl -- exec busybox-7b57f96db7-wfg67 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-894946 kubectl -- exec busybox-7b57f96db7-wp89j -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (44.97s)

TestMultiControlPlane/serial/PingHostFromPods (1.7s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 -p ha-894946 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-894946 kubectl -- exec busybox-7b57f96db7-s4h2n -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-894946 kubectl -- exec busybox-7b57f96db7-s4h2n -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-894946 kubectl -- exec busybox-7b57f96db7-wfg67 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-894946 kubectl -- exec busybox-7b57f96db7-wfg67 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-894946 kubectl -- exec busybox-7b57f96db7-wp89j -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-894946 kubectl -- exec busybox-7b57f96db7-wp89j -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.70s)
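The pipeline `nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3` used above extracts the resolved host IP: the 3rd space-separated field of the 5th line of the lookup output. A Go sketch of the equivalent parsing (the sample BusyBox `nslookup` output is illustrative; the real layout varies by BusyBox version, which is why the test hard-codes line 5):

```go
package main

import (
	"fmt"
	"strings"
)

// field3OfLine5 mimics `awk 'NR==5' | cut -d' ' -f3`: take the 5th
// line of the output, then its 3rd space-separated field.
func field3OfLine5(out string) string {
	lines := strings.Split(out, "\n")
	if len(lines) < 5 {
		return ""
	}
	fields := strings.Split(lines[4], " ")
	if len(fields) < 3 {
		return ""
	}
	return fields[2]
}

func main() {
	// Illustrative BusyBox-style nslookup output.
	out := "Server:    10.96.0.10\n" +
		"Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local\n" +
		"\n" +
		"Name:      host.minikube.internal\n" +
		"Address 1: 192.168.49.1\n"
	fmt.Println(field3OfLine5(out)) // → 192.168.49.1
}
```

The extracted address is then the target of the `ping -c 1` check, confirming each Pod can reach the host gateway.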

TestMultiControlPlane/serial/AddWorkerNode (19.96s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 -p ha-894946 node add --alsologtostderr -v 5
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 -p ha-894946 node add --alsologtostderr -v 5: (18.40335792s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-894946 status --alsologtostderr -v 5
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-894946 status --alsologtostderr -v 5: (1.554793348s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (19.96s)

TestMultiControlPlane/serial/NodeLabels (0.15s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-894946 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.15s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (1.35s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.353582931s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.35s)

TestMultiControlPlane/serial/CopyFile (21.38s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-arm64 -p ha-894946 status --output json --alsologtostderr -v 5
ha_test.go:328: (dbg) Done: out/minikube-linux-arm64 -p ha-894946 status --output json --alsologtostderr -v 5: (1.3580801s)
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-894946 cp testdata/cp-test.txt ha-894946:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-894946 ssh -n ha-894946 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-894946 cp ha-894946:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2724732651/001/cp-test_ha-894946.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-894946 ssh -n ha-894946 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-894946 cp ha-894946:/home/docker/cp-test.txt ha-894946-m02:/home/docker/cp-test_ha-894946_ha-894946-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-894946 ssh -n ha-894946 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-894946 ssh -n ha-894946-m02 "sudo cat /home/docker/cp-test_ha-894946_ha-894946-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-894946 cp ha-894946:/home/docker/cp-test.txt ha-894946-m03:/home/docker/cp-test_ha-894946_ha-894946-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-894946 ssh -n ha-894946 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-894946 ssh -n ha-894946-m03 "sudo cat /home/docker/cp-test_ha-894946_ha-894946-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-894946 cp ha-894946:/home/docker/cp-test.txt ha-894946-m04:/home/docker/cp-test_ha-894946_ha-894946-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-894946 ssh -n ha-894946 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-894946 ssh -n ha-894946-m04 "sudo cat /home/docker/cp-test_ha-894946_ha-894946-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-894946 cp testdata/cp-test.txt ha-894946-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-894946 ssh -n ha-894946-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-894946 cp ha-894946-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2724732651/001/cp-test_ha-894946-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-894946 ssh -n ha-894946-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-894946 cp ha-894946-m02:/home/docker/cp-test.txt ha-894946:/home/docker/cp-test_ha-894946-m02_ha-894946.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-894946 ssh -n ha-894946-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-894946 ssh -n ha-894946 "sudo cat /home/docker/cp-test_ha-894946-m02_ha-894946.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-894946 cp ha-894946-m02:/home/docker/cp-test.txt ha-894946-m03:/home/docker/cp-test_ha-894946-m02_ha-894946-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-894946 ssh -n ha-894946-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-894946 ssh -n ha-894946-m03 "sudo cat /home/docker/cp-test_ha-894946-m02_ha-894946-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-894946 cp ha-894946-m02:/home/docker/cp-test.txt ha-894946-m04:/home/docker/cp-test_ha-894946-m02_ha-894946-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-894946 ssh -n ha-894946-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-894946 ssh -n ha-894946-m04 "sudo cat /home/docker/cp-test_ha-894946-m02_ha-894946-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-894946 cp testdata/cp-test.txt ha-894946-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-894946 ssh -n ha-894946-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-894946 cp ha-894946-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2724732651/001/cp-test_ha-894946-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-894946 ssh -n ha-894946-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-894946 cp ha-894946-m03:/home/docker/cp-test.txt ha-894946:/home/docker/cp-test_ha-894946-m03_ha-894946.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-894946 ssh -n ha-894946-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-894946 ssh -n ha-894946 "sudo cat /home/docker/cp-test_ha-894946-m03_ha-894946.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-894946 cp ha-894946-m03:/home/docker/cp-test.txt ha-894946-m02:/home/docker/cp-test_ha-894946-m03_ha-894946-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-894946 ssh -n ha-894946-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-894946 ssh -n ha-894946-m02 "sudo cat /home/docker/cp-test_ha-894946-m03_ha-894946-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-894946 cp ha-894946-m03:/home/docker/cp-test.txt ha-894946-m04:/home/docker/cp-test_ha-894946-m03_ha-894946-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-894946 ssh -n ha-894946-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-894946 ssh -n ha-894946-m04 "sudo cat /home/docker/cp-test_ha-894946-m03_ha-894946-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-894946 cp testdata/cp-test.txt ha-894946-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-894946 ssh -n ha-894946-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-894946 cp ha-894946-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2724732651/001/cp-test_ha-894946-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-894946 ssh -n ha-894946-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-894946 cp ha-894946-m04:/home/docker/cp-test.txt ha-894946:/home/docker/cp-test_ha-894946-m04_ha-894946.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-894946 ssh -n ha-894946-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-894946 ssh -n ha-894946 "sudo cat /home/docker/cp-test_ha-894946-m04_ha-894946.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-894946 cp ha-894946-m04:/home/docker/cp-test.txt ha-894946-m02:/home/docker/cp-test_ha-894946-m04_ha-894946-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-894946 ssh -n ha-894946-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-894946 ssh -n ha-894946-m02 "sudo cat /home/docker/cp-test_ha-894946-m04_ha-894946-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-894946 cp ha-894946-m04:/home/docker/cp-test.txt ha-894946-m03:/home/docker/cp-test_ha-894946-m04_ha-894946-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-894946 ssh -n ha-894946-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-894946 ssh -n ha-894946-m03 "sudo cat /home/docker/cp-test_ha-894946-m04_ha-894946-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (21.38s)

TestMultiControlPlane/serial/StopSecondaryNode (11.75s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p ha-894946 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p ha-894946 node stop m02 --alsologtostderr -v 5: (10.981398936s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-arm64 -p ha-894946 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-894946 status --alsologtostderr -v 5: exit status 7 (772.456888ms)

-- stdout --
	ha-894946
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-894946-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-894946-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-894946-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0908 13:42:12.255342   71277 out.go:360] Setting OutFile to fd 1 ...
	I0908 13:42:12.255460   71277 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 13:42:12.255471   71277 out.go:374] Setting ErrFile to fd 2...
	I0908 13:42:12.255476   71277 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 13:42:12.255743   71277 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21504-2320/.minikube/bin
	I0908 13:42:12.255999   71277 out.go:368] Setting JSON to false
	I0908 13:42:12.256058   71277 mustload.go:65] Loading cluster: ha-894946
	I0908 13:42:12.256124   71277 notify.go:220] Checking for updates...
	I0908 13:42:12.256490   71277 config.go:182] Loaded profile config "ha-894946": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0908 13:42:12.256517   71277 status.go:174] checking status of ha-894946 ...
	I0908 13:42:12.257416   71277 cli_runner.go:164] Run: docker container inspect ha-894946 --format={{.State.Status}}
	I0908 13:42:12.276764   71277 status.go:371] ha-894946 host status = "Running" (err=<nil>)
	I0908 13:42:12.276796   71277 host.go:66] Checking if "ha-894946" exists ...
	I0908 13:42:12.277096   71277 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-894946
	I0908 13:42:12.302670   71277 host.go:66] Checking if "ha-894946" exists ...
	I0908 13:42:12.303022   71277 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0908 13:42:12.303078   71277 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-894946
	I0908 13:42:12.327573   71277 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21504-2320/.minikube/machines/ha-894946/id_rsa Username:docker}
	I0908 13:42:12.420825   71277 ssh_runner.go:195] Run: systemctl --version
	I0908 13:42:12.425262   71277 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0908 13:42:12.439612   71277 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0908 13:42:12.508832   71277 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:62 OomKillDisable:true NGoroutines:72 SystemTime:2025-09-08 13:42:12.496805838 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0908 13:42:12.509400   71277 kubeconfig.go:125] found "ha-894946" server: "https://192.168.49.254:8443"
	I0908 13:42:12.509444   71277 api_server.go:166] Checking apiserver status ...
	I0908 13:42:12.509496   71277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0908 13:42:12.521956   71277 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2279/cgroup
	I0908 13:42:12.534719   71277 api_server.go:182] apiserver freezer: "11:freezer:/docker/b9173430ae5c20d7b09c9f8d31b5706e0f1a6df5b4fc4e60608cb10668cb6156/kubepods/burstable/poda7c0e26855061832eadb1526b1679c30/069943223a602acbf8d4d7ee1bb62436b1106ec3f31f4268042deed054d5b237"
	I0908 13:42:12.534836   71277 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/b9173430ae5c20d7b09c9f8d31b5706e0f1a6df5b4fc4e60608cb10668cb6156/kubepods/burstable/poda7c0e26855061832eadb1526b1679c30/069943223a602acbf8d4d7ee1bb62436b1106ec3f31f4268042deed054d5b237/freezer.state
	I0908 13:42:12.547975   71277 api_server.go:204] freezer state: "THAWED"
	I0908 13:42:12.548012   71277 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0908 13:42:12.556265   71277 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0908 13:42:12.556294   71277 status.go:463] ha-894946 apiserver status = Running (err=<nil>)
	I0908 13:42:12.556305   71277 status.go:176] ha-894946 status: &{Name:ha-894946 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0908 13:42:12.556322   71277 status.go:174] checking status of ha-894946-m02 ...
	I0908 13:42:12.556646   71277 cli_runner.go:164] Run: docker container inspect ha-894946-m02 --format={{.State.Status}}
	I0908 13:42:12.575384   71277 status.go:371] ha-894946-m02 host status = "Stopped" (err=<nil>)
	I0908 13:42:12.575426   71277 status.go:384] host is not running, skipping remaining checks
	I0908 13:42:12.575435   71277 status.go:176] ha-894946-m02 status: &{Name:ha-894946-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0908 13:42:12.575715   71277 status.go:174] checking status of ha-894946-m03 ...
	I0908 13:42:12.576064   71277 cli_runner.go:164] Run: docker container inspect ha-894946-m03 --format={{.State.Status}}
	I0908 13:42:12.594550   71277 status.go:371] ha-894946-m03 host status = "Running" (err=<nil>)
	I0908 13:42:12.594586   71277 host.go:66] Checking if "ha-894946-m03" exists ...
	I0908 13:42:12.595029   71277 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-894946-m03
	I0908 13:42:12.614034   71277 host.go:66] Checking if "ha-894946-m03" exists ...
	I0908 13:42:12.614346   71277 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0908 13:42:12.614391   71277 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-894946-m03
	I0908 13:42:12.648447   71277 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21504-2320/.minikube/machines/ha-894946-m03/id_rsa Username:docker}
	I0908 13:42:12.740373   71277 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0908 13:42:12.752852   71277 kubeconfig.go:125] found "ha-894946" server: "https://192.168.49.254:8443"
	I0908 13:42:12.752883   71277 api_server.go:166] Checking apiserver status ...
	I0908 13:42:12.752931   71277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0908 13:42:12.765696   71277 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2190/cgroup
	I0908 13:42:12.776773   71277 api_server.go:182] apiserver freezer: "11:freezer:/docker/f4991eb786ecadbd9a743bbce12215938177afeea8567172344a0151048cc439/kubepods/burstable/podff967a6d33de253870deaa4627ecff03/fcb1ebc85e4698f22fe344f0d169c482234d4c1dbaada5780a8cf18a8738df1a"
	I0908 13:42:12.776855   71277 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/f4991eb786ecadbd9a743bbce12215938177afeea8567172344a0151048cc439/kubepods/burstable/podff967a6d33de253870deaa4627ecff03/fcb1ebc85e4698f22fe344f0d169c482234d4c1dbaada5780a8cf18a8738df1a/freezer.state
	I0908 13:42:12.788164   71277 api_server.go:204] freezer state: "THAWED"
	I0908 13:42:12.788235   71277 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0908 13:42:12.797026   71277 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0908 13:42:12.797056   71277 status.go:463] ha-894946-m03 apiserver status = Running (err=<nil>)
	I0908 13:42:12.797066   71277 status.go:176] ha-894946-m03 status: &{Name:ha-894946-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0908 13:42:12.797116   71277 status.go:174] checking status of ha-894946-m04 ...
	I0908 13:42:12.797466   71277 cli_runner.go:164] Run: docker container inspect ha-894946-m04 --format={{.State.Status}}
	I0908 13:42:12.816417   71277 status.go:371] ha-894946-m04 host status = "Running" (err=<nil>)
	I0908 13:42:12.816452   71277 host.go:66] Checking if "ha-894946-m04" exists ...
	I0908 13:42:12.816744   71277 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-894946-m04
	I0908 13:42:12.838870   71277 host.go:66] Checking if "ha-894946-m04" exists ...
	I0908 13:42:12.839198   71277 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0908 13:42:12.839241   71277 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-894946-m04
	I0908 13:42:12.857490   71277 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/21504-2320/.minikube/machines/ha-894946-m04/id_rsa Username:docker}
	I0908 13:42:12.951120   71277 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0908 13:42:12.967405   71277 status.go:176] ha-894946-m04 status: &{Name:ha-894946-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (11.75s)
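The stderr trace above shows how the status check decides the apiserver is running: find the `kube-apiserver` PID with `pgrep`, grep its freezer controller out of `/proc/<pid>/cgroup`, then read `freezer.state` under that cgroup path and expect `THAWED` (i.e. not frozen). A sketch of the cgroup-line parsing step, using a hypothetical helper rather than minikube's actual `status.go` code:

```go
package main

import (
	"fmt"
	"strings"
)

// parseFreezerPath extracts the freezer cgroup path from one line of
// /proc/<pid>/cgroup, e.g. "11:freezer:/docker/<id>/kubepods/...".
// The format is hierarchy-ID:controller-list:cgroup-path (cgroup v1).
func parseFreezerPath(cgroupLine string) (string, bool) {
	parts := strings.SplitN(cgroupLine, ":", 3)
	if len(parts) != 3 || parts[1] != "freezer" {
		return "", false
	}
	return parts[2], true
}

func main() {
	line := "11:freezer:/docker/b917/kubepods/burstable/poda7c0/069943"
	path, ok := parseFreezerPath(line)
	fmt.Println(ok, path)
	// The check then reads <cgroupfs>/freezer/<path>/freezer.state;
	// "THAWED" means the process is running, "FROZEN" means paused.
}
```

Only after the freezer state reads `THAWED` does the check proceed to the `https://…/healthz` probe shown in the log.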

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.78s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.78s)

TestMultiControlPlane/serial/RestartSecondaryNode (50.99s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p ha-894946 node start m02 --alsologtostderr -v 5
E0908 13:42:20.268727    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/functional-082913/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:42:20.275085    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/functional-082913/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:42:20.286417    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/functional-082913/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:42:20.307777    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/functional-082913/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:42:20.349113    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/functional-082913/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:42:20.430452    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/functional-082913/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:42:20.591946    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/functional-082913/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:42:20.913376    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/functional-082913/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:42:21.555325    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/functional-082913/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:42:22.836720    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/functional-082913/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:42:25.398991    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/functional-082913/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:42:30.521199    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/functional-082913/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:42:40.763123    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/functional-082913/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:43:01.244463    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/functional-082913/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p ha-894946 node start m02 --alsologtostderr -v 5: (49.527413536s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-894946 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Done: out/minikube-linux-arm64 -p ha-894946 status --alsologtostderr -v 5: (1.324767237s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (50.99s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.13s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.128425621s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.13s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (239.93s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-arm64 -p ha-894946 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-arm64 -p ha-894946 stop --alsologtostderr -v 5
ha_test.go:464: (dbg) Done: out/minikube-linux-arm64 -p ha-894946 stop --alsologtostderr -v 5: (34.145275887s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-arm64 -p ha-894946 start --wait true --alsologtostderr -v 5
E0908 13:43:42.206244    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/functional-082913/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:45:03.989304    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/addons-238540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:45:04.128694    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/functional-082913/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-arm64 -p ha-894946 start --wait true --alsologtostderr -v 5: (3m25.601751362s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-arm64 -p ha-894946 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (239.93s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (12.24s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-arm64 -p ha-894946 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-arm64 -p ha-894946 node delete m03 --alsologtostderr -v 5: (11.268254189s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-arm64 -p ha-894946 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (12.24s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.82s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.82s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (32.79s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-arm64 -p ha-894946 stop --alsologtostderr -v 5
E0908 13:47:20.268803    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/functional-082913/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:47:47.970249    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/functional-082913/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-arm64 -p ha-894946 stop --alsologtostderr -v 5: (32.67261569s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-arm64 -p ha-894946 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-894946 status --alsologtostderr -v 5: exit status 7 (115.908042ms)

                                                
                                                
-- stdout --
	ha-894946
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-894946-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-894946-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0908 13:47:51.585206   99247 out.go:360] Setting OutFile to fd 1 ...
	I0908 13:47:51.585407   99247 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 13:47:51.585421   99247 out.go:374] Setting ErrFile to fd 2...
	I0908 13:47:51.585427   99247 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 13:47:51.585709   99247 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21504-2320/.minikube/bin
	I0908 13:47:51.585933   99247 out.go:368] Setting JSON to false
	I0908 13:47:51.585988   99247 mustload.go:65] Loading cluster: ha-894946
	I0908 13:47:51.586084   99247 notify.go:220] Checking for updates...
	I0908 13:47:51.586466   99247 config.go:182] Loaded profile config "ha-894946": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0908 13:47:51.586495   99247 status.go:174] checking status of ha-894946 ...
	I0908 13:47:51.587082   99247 cli_runner.go:164] Run: docker container inspect ha-894946 --format={{.State.Status}}
	I0908 13:47:51.605931   99247 status.go:371] ha-894946 host status = "Stopped" (err=<nil>)
	I0908 13:47:51.605954   99247 status.go:384] host is not running, skipping remaining checks
	I0908 13:47:51.605961   99247 status.go:176] ha-894946 status: &{Name:ha-894946 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0908 13:47:51.606001   99247 status.go:174] checking status of ha-894946-m02 ...
	I0908 13:47:51.606322   99247 cli_runner.go:164] Run: docker container inspect ha-894946-m02 --format={{.State.Status}}
	I0908 13:47:51.631296   99247 status.go:371] ha-894946-m02 host status = "Stopped" (err=<nil>)
	I0908 13:47:51.631318   99247 status.go:384] host is not running, skipping remaining checks
	I0908 13:47:51.631326   99247 status.go:176] ha-894946-m02 status: &{Name:ha-894946-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0908 13:47:51.631348   99247 status.go:174] checking status of ha-894946-m04 ...
	I0908 13:47:51.631631   99247 cli_runner.go:164] Run: docker container inspect ha-894946-m04 --format={{.State.Status}}
	I0908 13:47:51.653448   99247 status.go:371] ha-894946-m04 host status = "Stopped" (err=<nil>)
	I0908 13:47:51.653471   99247 status.go:384] host is not running, skipping remaining checks
	I0908 13:47:51.653479   99247 status.go:176] ha-894946-m04 status: &{Name:ha-894946-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (32.79s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (101.18s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-arm64 -p ha-894946 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=docker
ha_test.go:562: (dbg) Done: out/minikube-linux-arm64 -p ha-894946 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=docker: (1m40.226545226s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-arm64 -p ha-894946 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (101.18s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.79s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.79s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (43.59s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-arm64 -p ha-894946 node add --control-plane --alsologtostderr -v 5
E0908 13:50:03.989340    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/addons-238540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-arm64 -p ha-894946 node add --control-plane --alsologtostderr -v 5: (42.173508157s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-arm64 -p ha-894946 status --alsologtostderr -v 5
ha_test.go:613: (dbg) Done: out/minikube-linux-arm64 -p ha-894946 status --alsologtostderr -v 5: (1.415971965s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (43.59s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.59s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.584904391s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.59s)

                                                
                                    
TestImageBuild/serial/Setup (38.08s)

                                                
                                                
=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-linux-arm64 start -p image-980049 --driver=docker  --container-runtime=docker
image_test.go:69: (dbg) Done: out/minikube-linux-arm64 start -p image-980049 --driver=docker  --container-runtime=docker: (38.075093243s)
--- PASS: TestImageBuild/serial/Setup (38.08s)

                                                
                                    
TestImageBuild/serial/NormalBuild (1.68s)

                                                
                                                
=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-980049
image_test.go:78: (dbg) Done: out/minikube-linux-arm64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-980049: (1.681780066s)
--- PASS: TestImageBuild/serial/NormalBuild (1.68s)

                                                
                                    
TestImageBuild/serial/BuildWithBuildArg (0.93s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-980049
--- PASS: TestImageBuild/serial/BuildWithBuildArg (0.93s)

                                                
                                    
TestImageBuild/serial/BuildWithDockerIgnore (0.82s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-980049
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.82s)

                                                
                                    
TestImageBuild/serial/BuildWithSpecifiedDockerfile (1.02s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-980049
image_test.go:88: (dbg) Done: out/minikube-linux-arm64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-980049: (1.023442872s)
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (1.02s)

                                                
                                    
TestJSONOutput/start/Command (48.85s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-010911 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=docker
E0908 13:51:27.060561    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/addons-238540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-010911 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=docker: (48.849061965s)
--- PASS: TestJSONOutput/start/Command (48.85s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.59s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-010911 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.59s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.54s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-010911 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.54s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (10.96s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-010911 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-010911 --output=json --user=testUser: (10.959723305s)
--- PASS: TestJSONOutput/stop/Command (10.96s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.25s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-230263 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-230263 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (103.123157ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"2eb1aefa-3cf5-47f7-9bbc-7c36ecb6eeeb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-230263] minikube v1.36.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"de5320c3-f853-414f-9462-69e251dec74d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21504"}}
	{"specversion":"1.0","id":"bf281795-6bb1-4caf-ac84-681b4e884237","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"237b4036-ac9a-4672-8845-b9ec4f43bdd3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21504-2320/kubeconfig"}}
	{"specversion":"1.0","id":"2e3c9769-cf52-48a0-b324-74546cfcf453","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21504-2320/.minikube"}}
	{"specversion":"1.0","id":"87a6b69f-693b-4657-acee-49fa10c13328","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"a84a1022-a19e-4ab7-acf2-e659321bf87a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"00622079-6ffc-4f41-8c84-a3520f6d2242","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-230263" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-230263
--- PASS: TestErrorJSONOutput (0.25s)

                                                
                                    
TestKicCustomNetwork/create_custom_network (35.32s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-603902 --network=
E0908 13:52:20.269741    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/functional-082913/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-603902 --network=: (33.144949061s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-603902" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-603902
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-603902: (2.149550558s)
--- PASS: TestKicCustomNetwork/create_custom_network (35.32s)

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (38.33s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-554360 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-554360 --network=bridge: (36.272888526s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-554360" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-554360
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-554360: (2.026414789s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (38.33s)

                                                
                                    
TestKicExistingNetwork (36.22s)

                                                
                                                
=== RUN   TestKicExistingNetwork
I0908 13:53:30.163619    4120 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0908 13:53:30.180058    4120 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0908 13:53:30.180139    4120 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I0908 13:53:30.180159    4120 cli_runner.go:164] Run: docker network inspect existing-network
W0908 13:53:30.196779    4120 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I0908 13:53:30.196811    4120 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

stderr:
Error response from daemon: network existing-network not found
I0908 13:53:30.196829    4120 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

** /stderr **
I0908 13:53:30.196952    4120 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0908 13:53:30.219855    4120 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-128e80606eed IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:ea:f3:36:ea:cc:a6} reservation:<nil>}
I0908 13:53:30.220163    4120 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a34be0}
I0908 13:53:30.220187    4120 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I0908 13:53:30.220241    4120 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I0908 13:53:30.301082    4120 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-106936 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-106936 --network=existing-network: (33.957192843s)
helpers_test.go:175: Cleaning up "existing-network-106936" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-106936
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-106936: (2.091640366s)
I0908 13:54:06.368644    4120 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (36.22s)
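The log above shows minikube skipping the taken subnet 192.168.49.0/24 and settling on 192.168.58.0/24 (network.go:211 / network.go:206). A minimal sketch of that first-free-/24 scan, stepping by 9 as the chosen subnets in this run suggest (49, 58, and later 67); `firstFreeSubnet` is a hypothetical helper, not minikube's actual `pkg/network` code, which also handles CIDR math, reservations, and more candidate ranges:

```go
package main

import "fmt"

// firstFreeSubnet walks candidate private /24 blocks in the order this run's
// log suggests (192.168.49.0/24, then .58, .67, ...) and returns the first
// one not already present in the taken set.
func firstFreeSubnet(taken map[string]bool) string {
	for third := 49; third <= 247; third += 9 {
		cidr := fmt.Sprintf("192.168.%d.0/24", third)
		if !taken[cidr] {
			return cidr
		}
	}
	return "" // no free candidate
}

func main() {
	// In this run, bridge br-128e80606eed already held 192.168.49.0/24.
	taken := map[string]bool{"192.168.49.0/24": true}
	fmt.Println(firstFreeSubnet(taken)) // 192.168.58.0/24
}
```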

TestKicCustomSubnet (36.05s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-725895 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-725895 --subnet=192.168.60.0/24: (33.828725844s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-725895 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-725895" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-725895
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-725895: (2.191199432s)
--- PASS: TestKicCustomSubnet (36.05s)

TestKicStaticIP (34.31s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-448991 --static-ip=192.168.200.200
E0908 13:55:03.989018    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/addons-238540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-448991 --static-ip=192.168.200.200: (31.971267462s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-448991 ip
helpers_test.go:175: Cleaning up "static-ip-448991" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-448991
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-448991: (2.172278167s)
--- PASS: TestKicStaticIP (34.31s)

TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.06s)

TestMinikubeProfile (79.14s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-468963 --driver=docker  --container-runtime=docker
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-468963 --driver=docker  --container-runtime=docker: (38.51261903s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-471564 --driver=docker  --container-runtime=docker
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-471564 --driver=docker  --container-runtime=docker: (34.956932635s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-468963
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-471564
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-471564" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-471564
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-471564: (2.092280611s)
helpers_test.go:175: Cleaning up "first-468963" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-468963
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-468963: (2.158833023s)
--- PASS: TestMinikubeProfile (79.14s)

TestMountStart/serial/StartWithMountFirst (10.55s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-288738 --memory=3072 --mount-string /tmp/TestMountStartserial347000927/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-288738 --memory=3072 --mount-string /tmp/TestMountStartserial347000927/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (9.549223753s)
--- PASS: TestMountStart/serial/StartWithMountFirst (10.55s)

TestMountStart/serial/VerifyMountFirst (0.25s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-288738 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.25s)

TestMountStart/serial/StartWithMountSecond (7.54s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-290559 --memory=3072 --mount-string /tmp/TestMountStartserial347000927/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-290559 --memory=3072 --mount-string /tmp/TestMountStartserial347000927/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (6.536635211s)
--- PASS: TestMountStart/serial/StartWithMountSecond (7.54s)

TestMountStart/serial/VerifyMountSecond (0.26s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-290559 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.26s)

TestMountStart/serial/DeleteFirst (1.47s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-288738 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-288738 --alsologtostderr -v=5: (1.467486368s)
--- PASS: TestMountStart/serial/DeleteFirst (1.47s)

TestMountStart/serial/VerifyMountPostDelete (0.27s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-290559 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.27s)

TestMountStart/serial/Stop (1.19s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-290559
mount_start_test.go:196: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-290559: (1.191554043s)
--- PASS: TestMountStart/serial/Stop (1.19s)

TestMountStart/serial/RestartStopped (8.87s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-290559
mount_start_test.go:207: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-290559: (7.873232657s)
--- PASS: TestMountStart/serial/RestartStopped (8.87s)

TestMountStart/serial/VerifyMountPostStop (0.25s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-290559 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.25s)

TestMultiNode/serial/FreshStart2Nodes (64.01s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-025632 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=docker
E0908 13:57:20.268900    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/functional-082913/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-025632 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=docker: (1m3.411735416s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-025632 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (64.01s)

TestMultiNode/serial/DeployApp2Nodes (48.67s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-025632 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-025632 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-025632 -- rollout status deployment/busybox: (5.12896716s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-025632 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --"
I0908 13:58:17.807320    4120 retry.go:31] will retry after 1.310427368s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-025632 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --"
I0908 13:58:19.260105    4120 retry.go:31] will retry after 1.127071016s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-025632 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --"
I0908 13:58:20.534272    4120 retry.go:31] will retry after 1.993619924s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-025632 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --"
I0908 13:58:22.670874    4120 retry.go:31] will retry after 2.940945674s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-025632 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --"
I0908 13:58:25.761813    4120 retry.go:31] will retry after 6.526577838s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-025632 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --"
I0908 13:58:32.428199    4120 retry.go:31] will retry after 9.684463206s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-025632 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --"
I0908 13:58:42.296949    4120 retry.go:31] will retry after 16.430684376s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --"
E0908 13:58:43.332122    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/functional-082913/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-025632 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-025632 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-025632 -- exec busybox-7b57f96db7-4fmsj -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-025632 -- exec busybox-7b57f96db7-zpx2n -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-025632 -- exec busybox-7b57f96db7-4fmsj -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-025632 -- exec busybox-7b57f96db7-zpx2n -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-025632 -- exec busybox-7b57f96db7-4fmsj -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-025632 -- exec busybox-7b57f96db7-zpx2n -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (48.67s)

TestMultiNode/serial/PingHostFrom2Pods (1.06s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-025632 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-025632 -- exec busybox-7b57f96db7-4fmsj -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-025632 -- exec busybox-7b57f96db7-4fmsj -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-025632 -- exec busybox-7b57f96db7-zpx2n -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-025632 -- exec busybox-7b57f96db7-zpx2n -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.06s)

TestMultiNode/serial/AddNode (16.94s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-025632 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-025632 -v=5 --alsologtostderr: (16.113674024s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-025632 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (16.94s)

TestMultiNode/serial/MultiNodeLabels (0.16s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-025632 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.16s)

TestMultiNode/serial/ProfileList (0.85s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.85s)

TestMultiNode/serial/CopyFile (10.73s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-025632 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-025632 cp testdata/cp-test.txt multinode-025632:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-025632 ssh -n multinode-025632 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-025632 cp multinode-025632:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1688597384/001/cp-test_multinode-025632.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-025632 ssh -n multinode-025632 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-025632 cp multinode-025632:/home/docker/cp-test.txt multinode-025632-m02:/home/docker/cp-test_multinode-025632_multinode-025632-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-025632 ssh -n multinode-025632 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-025632 ssh -n multinode-025632-m02 "sudo cat /home/docker/cp-test_multinode-025632_multinode-025632-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-025632 cp multinode-025632:/home/docker/cp-test.txt multinode-025632-m03:/home/docker/cp-test_multinode-025632_multinode-025632-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-025632 ssh -n multinode-025632 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-025632 ssh -n multinode-025632-m03 "sudo cat /home/docker/cp-test_multinode-025632_multinode-025632-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-025632 cp testdata/cp-test.txt multinode-025632-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-025632 ssh -n multinode-025632-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-025632 cp multinode-025632-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1688597384/001/cp-test_multinode-025632-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-025632 ssh -n multinode-025632-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-025632 cp multinode-025632-m02:/home/docker/cp-test.txt multinode-025632:/home/docker/cp-test_multinode-025632-m02_multinode-025632.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-025632 ssh -n multinode-025632-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-025632 ssh -n multinode-025632 "sudo cat /home/docker/cp-test_multinode-025632-m02_multinode-025632.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-025632 cp multinode-025632-m02:/home/docker/cp-test.txt multinode-025632-m03:/home/docker/cp-test_multinode-025632-m02_multinode-025632-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-025632 ssh -n multinode-025632-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-025632 ssh -n multinode-025632-m03 "sudo cat /home/docker/cp-test_multinode-025632-m02_multinode-025632-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-025632 cp testdata/cp-test.txt multinode-025632-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-025632 ssh -n multinode-025632-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-025632 cp multinode-025632-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1688597384/001/cp-test_multinode-025632-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-025632 ssh -n multinode-025632-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-025632 cp multinode-025632-m03:/home/docker/cp-test.txt multinode-025632:/home/docker/cp-test_multinode-025632-m03_multinode-025632.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-025632 ssh -n multinode-025632-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-025632 ssh -n multinode-025632 "sudo cat /home/docker/cp-test_multinode-025632-m03_multinode-025632.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-025632 cp multinode-025632-m03:/home/docker/cp-test.txt multinode-025632-m02:/home/docker/cp-test_multinode-025632-m03_multinode-025632-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-025632 ssh -n multinode-025632-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-025632 ssh -n multinode-025632-m02 "sudo cat /home/docker/cp-test_multinode-025632-m03_multinode-025632-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.73s)

TestMultiNode/serial/StopNode (2.25s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-025632 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-025632 node stop m03: (1.19817872s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-025632 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-025632 status: exit status 7 (531.290828ms)
-- stdout --
	multinode-025632
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-025632-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-025632-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-025632 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-025632 status --alsologtostderr: exit status 7 (520.388421ms)
-- stdout --
	multinode-025632
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-025632-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-025632-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0908 13:59:32.457298  173674 out.go:360] Setting OutFile to fd 1 ...
	I0908 13:59:32.457509  173674 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 13:59:32.457536  173674 out.go:374] Setting ErrFile to fd 2...
	I0908 13:59:32.457555  173674 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 13:59:32.457907  173674 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21504-2320/.minikube/bin
	I0908 13:59:32.458198  173674 out.go:368] Setting JSON to false
	I0908 13:59:32.458269  173674 mustload.go:65] Loading cluster: multinode-025632
	I0908 13:59:32.458371  173674 notify.go:220] Checking for updates...
	I0908 13:59:32.458806  173674 config.go:182] Loaded profile config "multinode-025632": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0908 13:59:32.458852  173674 status.go:174] checking status of multinode-025632 ...
	I0908 13:59:32.459472  173674 cli_runner.go:164] Run: docker container inspect multinode-025632 --format={{.State.Status}}
	I0908 13:59:32.479340  173674 status.go:371] multinode-025632 host status = "Running" (err=<nil>)
	I0908 13:59:32.479366  173674 host.go:66] Checking if "multinode-025632" exists ...
	I0908 13:59:32.479670  173674 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-025632
	I0908 13:59:32.505192  173674 host.go:66] Checking if "multinode-025632" exists ...
	I0908 13:59:32.505509  173674 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0908 13:59:32.505559  173674 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-025632
	I0908 13:59:32.524607  173674 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32909 SSHKeyPath:/home/jenkins/minikube-integration/21504-2320/.minikube/machines/multinode-025632/id_rsa Username:docker}
	I0908 13:59:32.616465  173674 ssh_runner.go:195] Run: systemctl --version
	I0908 13:59:32.620949  173674 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0908 13:59:32.634110  173674 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0908 13:59:32.696604  173674 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-09-08 13:59:32.687252869 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0908 13:59:32.697358  173674 kubeconfig.go:125] found "multinode-025632" server: "https://192.168.67.2:8443"
	I0908 13:59:32.697397  173674 api_server.go:166] Checking apiserver status ...
	I0908 13:59:32.697458  173674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0908 13:59:32.709590  173674 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2206/cgroup
	I0908 13:59:32.720890  173674 api_server.go:182] apiserver freezer: "11:freezer:/docker/a5bc556032d0f746777e3f6bb26059827098991a7f6c9fa338f6860ba1a57889/kubepods/burstable/pod9726573f3032c5ca8046dff52aa59669/7d883fc2961d516f6c18ed904927c40dc40b44f32051283af4e7fccf85d7156a"
	I0908 13:59:32.720957  173674 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/a5bc556032d0f746777e3f6bb26059827098991a7f6c9fa338f6860ba1a57889/kubepods/burstable/pod9726573f3032c5ca8046dff52aa59669/7d883fc2961d516f6c18ed904927c40dc40b44f32051283af4e7fccf85d7156a/freezer.state
	I0908 13:59:32.730903  173674 api_server.go:204] freezer state: "THAWED"
	I0908 13:59:32.730941  173674 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0908 13:59:32.740849  173674 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0908 13:59:32.740880  173674 status.go:463] multinode-025632 apiserver status = Running (err=<nil>)
	I0908 13:59:32.740901  173674 status.go:176] multinode-025632 status: &{Name:multinode-025632 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0908 13:59:32.740918  173674 status.go:174] checking status of multinode-025632-m02 ...
	I0908 13:59:32.741264  173674 cli_runner.go:164] Run: docker container inspect multinode-025632-m02 --format={{.State.Status}}
	I0908 13:59:32.762226  173674 status.go:371] multinode-025632-m02 host status = "Running" (err=<nil>)
	I0908 13:59:32.762253  173674 host.go:66] Checking if "multinode-025632-m02" exists ...
	I0908 13:59:32.762577  173674 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-025632-m02
	I0908 13:59:32.780694  173674 host.go:66] Checking if "multinode-025632-m02" exists ...
	I0908 13:59:32.780999  173674 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0908 13:59:32.781043  173674 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-025632-m02
	I0908 13:59:32.798017  173674 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32914 SSHKeyPath:/home/jenkins/minikube-integration/21504-2320/.minikube/machines/multinode-025632-m02/id_rsa Username:docker}
	I0908 13:59:32.888221  173674 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0908 13:59:32.900492  173674 status.go:176] multinode-025632-m02 status: &{Name:multinode-025632-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0908 13:59:32.900524  173674 status.go:174] checking status of multinode-025632-m03 ...
	I0908 13:59:32.900834  173674 cli_runner.go:164] Run: docker container inspect multinode-025632-m03 --format={{.State.Status}}
	I0908 13:59:32.917615  173674 status.go:371] multinode-025632-m03 host status = "Stopped" (err=<nil>)
	I0908 13:59:32.917653  173674 status.go:384] host is not running, skipping remaining checks
	I0908 13:59:32.917661  173674 status.go:176] multinode-025632-m03 status: &{Name:multinode-025632-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.25s)
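The stderr above shows how `minikube status` probes each node: a `docker container inspect` for the host state, then an SSH command that reads disk usage. That disk probe is the one-line pipeline at 13:59:32.505509; a local sketch of it (reading `/` instead of the guest's `/var`, with no SSH involved):

```shell
# Mirrors the probe `df -h /var | awk 'NR==2{print $5}'` from the log:
# df prints a header plus one data row per filesystem; awk selects row 2
# (the data row) and field 5 (the Use% column).
df -h / | awk 'NR==2{print $5}'
# prints something like "42%" — the used percentage on /
```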

                                                
                                    
TestMultiNode/serial/StartAfterStop (10.03s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-025632 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-025632 node start m03 -v=5 --alsologtostderr: (9.22244471s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-025632 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (10.03s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (79.23s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-025632
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-025632
E0908 14:00:03.989304    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/addons-238540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-025632: (22.613981137s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-025632 --wait=true -v=5 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-025632 --wait=true -v=5 --alsologtostderr: (56.477106609s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-025632
--- PASS: TestMultiNode/serial/RestartKeepsNodes (79.23s)

                                                
                                    
TestMultiNode/serial/DeleteNode (5.64s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-025632 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-025632 node delete m03: (4.940749789s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-025632 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.64s)
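The go-template passed to `kubectl get nodes` above walks each node's `.status.conditions` and prints the status of the `Ready` condition. Without a cluster at hand, the same per-node extraction can be sketched with awk over the default table output; the node rows below are hypothetical sample data, not captured from this run:

```shell
# Hypothetical `kubectl get nodes` table output, for illustration only.
sample='NAME                   STATUS   ROLES           AGE   VERSION
multinode-025632       Ready    control-plane   24m   v1.34.0
multinode-025632-m02   Ready    <none>          23m   v1.34.0'

# Skip the header row (NR>1), print the STATUS column — one value per node,
# analogous to the "True"/"False" lines the go-template emits.
printf '%s\n' "$sample" | awk 'NR>1 {print $2}'
```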

                                                
                                    
TestMultiNode/serial/StopMultiNode (21.67s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-025632 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-025632 stop: (21.462613999s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-025632 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-025632 status: exit status 7 (99.751474ms)

                                                
                                                
-- stdout --
	multinode-025632
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-025632-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-025632 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-025632 status --alsologtostderr: exit status 7 (109.88888ms)

                                                
                                                
-- stdout --
	multinode-025632
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-025632-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0908 14:01:29.443602  186865 out.go:360] Setting OutFile to fd 1 ...
	I0908 14:01:29.443822  186865 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 14:01:29.443849  186865 out.go:374] Setting ErrFile to fd 2...
	I0908 14:01:29.443867  186865 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 14:01:29.444158  186865 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21504-2320/.minikube/bin
	I0908 14:01:29.444391  186865 out.go:368] Setting JSON to false
	I0908 14:01:29.444459  186865 mustload.go:65] Loading cluster: multinode-025632
	I0908 14:01:29.444525  186865 notify.go:220] Checking for updates...
	I0908 14:01:29.445803  186865 config.go:182] Loaded profile config "multinode-025632": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0908 14:01:29.445869  186865 status.go:174] checking status of multinode-025632 ...
	I0908 14:01:29.446664  186865 cli_runner.go:164] Run: docker container inspect multinode-025632 --format={{.State.Status}}
	I0908 14:01:29.466035  186865 status.go:371] multinode-025632 host status = "Stopped" (err=<nil>)
	I0908 14:01:29.466057  186865 status.go:384] host is not running, skipping remaining checks
	I0908 14:01:29.466064  186865 status.go:176] multinode-025632 status: &{Name:multinode-025632 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0908 14:01:29.466105  186865 status.go:174] checking status of multinode-025632-m02 ...
	I0908 14:01:29.466400  186865 cli_runner.go:164] Run: docker container inspect multinode-025632-m02 --format={{.State.Status}}
	I0908 14:01:29.496318  186865 status.go:371] multinode-025632-m02 host status = "Stopped" (err=<nil>)
	I0908 14:01:29.496341  186865 status.go:384] host is not running, skipping remaining checks
	I0908 14:01:29.496349  186865 status.go:176] multinode-025632-m02 status: &{Name:multinode-025632-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (21.67s)
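In both status invocations above, `minikube status` exits with code 7 once every node reports Stopped; the test treats that nonzero exit as expected rather than a failure. A sketch of scripting against that behavior, with a hypothetical stub standing in for the real binary (the stub simply returns 7, matching what this run observed):

```shell
# Hypothetical stand-in for `out/minikube-linux-arm64 -p <profile> status`;
# returns 7 as seen in the log when all hosts are stopped.
minikube_status() { return 7; }

if minikube_status; then
  echo "profile running"
else
  rc=$?                                  # exit status of the if-condition above
  echo "profile not running (exit $rc)"  # exit 7 here, matching the log
fi
```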

                                                
                                    
TestMultiNode/serial/RestartMultiNode (51.85s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-025632 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=docker
E0908 14:02:20.268899    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/functional-082913/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-025632 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=docker: (51.147532879s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-025632 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (51.85s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (37.65s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-025632
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-025632-m02 --driver=docker  --container-runtime=docker
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-025632-m02 --driver=docker  --container-runtime=docker: exit status 14 (107.942505ms)

                                                
                                                
-- stdout --
	* [multinode-025632-m02] minikube v1.36.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21504
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21504-2320/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21504-2320/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-025632-m02' is duplicated with machine name 'multinode-025632-m02' in profile 'multinode-025632'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-025632-m03 --driver=docker  --container-runtime=docker
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-025632-m03 --driver=docker  --container-runtime=docker: (34.525243796s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-025632
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-025632: exit status 80 (780.552045ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-025632 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-025632-m03 already exists in multinode-025632-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-025632-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-025632-m03: (2.182910087s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (37.65s)

                                                
                                    
TestPreload (122.48s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-350973 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.32.0
preload_test.go:43: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-350973 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.32.0: (50.007611646s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-350973 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-arm64 -p test-preload-350973 image pull gcr.io/k8s-minikube/busybox: (2.33020375s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-350973
preload_test.go:57: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-350973: (10.901202925s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-350973 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker
preload_test.go:65: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-350973 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker: (56.787094886s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-350973 image list
helpers_test.go:175: Cleaning up "test-preload-350973" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-350973
E0908 14:05:03.989115    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/addons-238540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-350973: (2.236402811s)
--- PASS: TestPreload (122.48s)

                                                
                                    
TestSkaffold (138.5s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /tmp/skaffold.exe1737227053 version
skaffold_test.go:63: skaffold version: v2.16.1
skaffold_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p skaffold-797663 --memory=3072 --driver=docker  --container-runtime=docker
skaffold_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p skaffold-797663 --memory=3072 --driver=docker  --container-runtime=docker: (29.593268314s)
skaffold_test.go:86: copying out/minikube-linux-arm64 to /home/jenkins/workspace/Docker_Linux_docker_arm64/out/minikube
skaffold_test.go:105: (dbg) Run:  /tmp/skaffold.exe1737227053 run --minikube-profile skaffold-797663 --kube-context skaffold-797663 --status-check=true --port-forward=false --interactive=false
E0908 14:07:20.269640    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/functional-082913/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
skaffold_test.go:105: (dbg) Done: /tmp/skaffold.exe1737227053 run --minikube-profile skaffold-797663 --kube-context skaffold-797663 --status-check=true --port-forward=false --interactive=false: (1m32.783239976s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:352: "leeroy-app-5f84845d47-v4zzz" [0b5b23e2-0b30-468e-b39e-fff42f846af2] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 6.002661432s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:352: "leeroy-web-8557c5dd9d-l7v7n" [8bac50a0-570d-4716-b95b-bd997402c07c] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.003579971s
helpers_test.go:175: Cleaning up "skaffold-797663" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p skaffold-797663
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p skaffold-797663: (3.329136773s)
--- PASS: TestSkaffold (138.50s)

                                                
                                    
TestInsufficientStorage (11.04s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-001552 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=docker
E0908 14:08:07.062683    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/addons-238540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-001552 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=docker: exit status 26 (8.773651758s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"e5e7e92c-7144-43dc-81fd-6af571271489","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-001552] minikube v1.36.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"542f10b8-e4f7-45da-9367-d76084879002","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21504"}}
	{"specversion":"1.0","id":"76424e4a-37ab-4e08-985c-1b7505c6b583","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"f960d513-738a-4a74-b936-98f5cf01163f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21504-2320/kubeconfig"}}
	{"specversion":"1.0","id":"a3f6078c-6ace-4ed9-8631-499f3d9b52d5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21504-2320/.minikube"}}
	{"specversion":"1.0","id":"4c6b5a8e-4f87-4515-827c-94b05e6663e8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"961dae06-c61e-4cdf-a64b-eb3965574120","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"80d30104-8336-47f1-94d6-19abf2fe7246","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"62a5b349-0242-4d7b-b654-3deab4974881","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"9122c50c-7724-4509-89d8-a63d28f1689c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"fb406425-0070-4680-9476-efd7b4af20a1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"d0aa8895-ea71-4de2-95a5-774cfe7304e1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-001552\" primary control-plane node in \"insufficient-storage-001552\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"f7ad2dda-13e1-4759-9ae4-1c63e6724d5a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.47-1756980985-21488 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"10c486a0-937c-465c-8740-d21d5935fbde","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"8ff7e560-0034-443a-8b74-a4da8c8e653e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-001552 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-001552 --output=json --layout=cluster: exit status 7 (287.010489ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-001552","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.36.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-001552","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0908 14:08:10.334942  219955 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-001552" does not appear in /home/jenkins/minikube-integration/21504-2320/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-001552 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-001552 --output=json --layout=cluster: exit status 7 (297.650266ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-001552","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.36.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-001552","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0908 14:08:10.631498  220019 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-001552" does not appear in /home/jenkins/minikube-integration/21504-2320/kubeconfig
	E0908 14:08:10.641857  220019 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/insufficient-storage-001552/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-001552" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-001552
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-001552: (1.685544897s)
--- PASS: TestInsufficientStorage (11.04s)
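With `--output=json`, minikube emits one CloudEvents-style JSON object per line, as in the stdout captured above. A minimal sketch of pulling the human-readable `data.message` field out of such a line with sed (the event is abridged from this run; the pattern assumes the message contains no escaped quotes, so a real consumer should use a proper JSON parser):

```shell
# One CloudEvents line from the run above (abridged to its relevant fields).
event='{"specversion":"1.0","id":"542f10b8-e4f7-45da-9367-d76084879002","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","data":{"message":"MINIKUBE_LOCATION=21504"}}'

# Capture everything between "message":" and the next double quote.
printf '%s\n' "$event" | sed -n 's/.*"message":"\([^"]*\)".*/\1/p'
# prints: MINIKUBE_LOCATION=21504
```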

                                                
                                    
TestRunningBinaryUpgrade (80.48s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.1730238555 start -p running-upgrade-443694 --memory=3072 --vm-driver=docker  --container-runtime=docker
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.1730238555 start -p running-upgrade-443694 --memory=3072 --vm-driver=docker  --container-runtime=docker: (36.193433934s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-443694 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-443694 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (40.691500975s)
helpers_test.go:175: Cleaning up "running-upgrade-443694" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-443694
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-443694: (2.252104285s)
--- PASS: TestRunningBinaryUpgrade (80.48s)

                                                
                                    
TestKubernetesUpgrade (372.9s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-151529 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E0908 14:12:46.930178    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/skaffold-797663/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:12:46.936602    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/skaffold-797663/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:12:46.948069    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/skaffold-797663/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:12:46.969545    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/skaffold-797663/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:12:47.010927    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/skaffold-797663/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:12:47.092351    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/skaffold-797663/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:12:47.253931    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/skaffold-797663/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:12:47.575599    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/skaffold-797663/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:12:48.217686    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/skaffold-797663/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:12:49.499284    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/skaffold-797663/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:12:52.061820    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/skaffold-797663/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:12:57.183428    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/skaffold-797663/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:13:07.425769    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/skaffold-797663/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-151529 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (35.614697768s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-151529
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-151529: (1.915374862s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-151529 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-151529 status --format={{.Host}}: exit status 7 (75.565991ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-151529 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E0908 14:13:27.907606    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/skaffold-797663/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:14:08.869588    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/skaffold-797663/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-151529 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (4m40.335802635s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-151529 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-151529 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=docker
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-151529 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=docker: exit status 106 (133.04158ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-151529] minikube v1.36.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21504
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21504-2320/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21504-2320/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.0 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-151529
	    minikube start -p kubernetes-upgrade-151529 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-1515292 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.0, by running:
	    
	    minikube start -p kubernetes-upgrade-151529 --kubernetes-version=v1.34.0
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-151529 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E0908 14:18:14.632909    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/skaffold-797663/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-151529 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (52.045146262s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-151529" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-151529
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-151529: (2.673511024s)
--- PASS: TestKubernetesUpgrade (372.90s)
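The downgrade attempt above fails with `K8S_DOWNGRADE_UNSUPPORTED` because the requested version (v1.28.0) is older than the running cluster (v1.34.0). A minimal sketch of that version comparison, purely illustrative and not minikube's actual implementation:

```python
# Illustrative only: order Kubernetes version strings like the
# v1.34.0 -> v1.28.0 downgrade check seen in the log above.

def parse_version(v):
    """Turn 'v1.34.0' into (1, 34, 0) for ordered comparison."""
    return tuple(int(part) for part in v.lstrip("v").split("."))

def is_downgrade(existing, requested):
    """True when the requested version is older than the cluster's."""
    return parse_version(requested) < parse_version(existing)

print(is_downgrade("v1.34.0", "v1.28.0"))  # True: refused by minikube
print(is_downgrade("v1.28.0", "v1.34.0"))  # False: a normal upgrade
```

The real check lives in minikube's start logic; this sketch only shows why v1.28.0 sorts below v1.34.0.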

                                                
                                    
TestMissingContainerUpgrade (123.24s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.32.0.4104704686 start -p missing-upgrade-662824 --memory=3072 --driver=docker  --container-runtime=docker
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.32.0.4104704686 start -p missing-upgrade-662824 --memory=3072 --driver=docker  --container-runtime=docker: (58.809799751s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-662824
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-662824: (10.606007429s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-662824
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-662824 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E0908 14:17:20.268506    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/functional-082913/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-662824 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (50.342059833s)
helpers_test.go:175: Cleaning up "missing-upgrade-662824" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-662824
E0908 14:17:46.930529    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/skaffold-797663/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-662824: (2.34224179s)
--- PASS: TestMissingContainerUpgrade (123.24s)

                                                
                                    
TestPause/serial/Start (83.61s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-818649 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=docker
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-818649 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=docker: (1m23.608515669s)
--- PASS: TestPause/serial/Start (83.61s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:85: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-113738 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:85: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-113738 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=docker: exit status 14 (101.953083ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-113738] minikube v1.36.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21504
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21504-2320/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21504-2320/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)
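The `MK_USAGE` failure above comes from combining the mutually exclusive `--no-kubernetes` and `--kubernetes-version` flags. A hedged sketch of that kind of flag validation (the function and exit code mirror the log; this is not minikube's source):

```python
# Illustrative flag-validation sketch. Exit code 14 matches the
# "exit status 14" recorded for this test above.

def validate_flags(no_kubernetes, kubernetes_version):
    """Reject --no-kubernetes combined with an explicit --kubernetes-version."""
    if no_kubernetes and kubernetes_version:
        raise SystemExit(14)

validate_flags(no_kubernetes=True, kubernetes_version=None)  # allowed
try:
    validate_flags(no_kubernetes=True, kubernetes_version="v1.28.0")
except SystemExit as e:
    print(e.code)  # 14
```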

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (40.36s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:97: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-113738 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:97: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-113738 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (39.99284256s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-113738 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (40.36s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (49.68s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-818649 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-818649 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (49.646634162s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (49.68s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (17.97s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:114: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-113738 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
E0908 14:10:03.989357    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/addons-238540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
no_kubernetes_test.go:114: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-113738 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (15.781074235s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-113738 status -o json
no_kubernetes_test.go:202: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-113738 status -o json: exit status 2 (302.837619ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-113738","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:126: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-113738
no_kubernetes_test.go:126: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-113738: (1.883122793s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (17.97s)
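The `status -o json` output above (exit status 2) shows the expected mixed state: the host container is running while the Kubernetes components are stopped. A small sketch of reading that JSON, using the exact line from the log:

```python
import json

# Parse the status JSON printed by `minikube status -o json` above and
# confirm the host is up while kubelet/apiserver are stopped.
status = json.loads(
    '{"Name":"NoKubernetes-113738","Host":"Running","Kubelet":"Stopped",'
    '"APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}'
)
assert status["Host"] == "Running" and status["Kubelet"] == "Stopped"
print(status["Name"], status["APIServer"])  # NoKubernetes-113738 Stopped
```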

                                                
                                    
TestNoKubernetes/serial/Start (11.08s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:138: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-113738 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:138: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-113738 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (11.084377636s)
--- PASS: TestNoKubernetes/serial/Start (11.08s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.31s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-113738 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-113738 "sudo systemctl is-active --quiet service kubelet": exit status 1 (314.461996ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.31s)
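The test treats a non-zero exit from `systemctl is-active` (status 3 over ssh here) as proof the kubelet unit is not running. A trivial sketch of that mapping, assuming the usual systemd convention that only exit 0 means active:

```python
# Sketch: interpret `systemctl is-active` exit codes as the test does.
def kubelet_running(is_active_exit_code):
    """systemd convention: exit 0 means the unit is active."""
    return is_active_exit_code == 0

print(kubelet_running(3))  # False: kubelet is not active, as the test expects
```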

                                                
                                    
TestNoKubernetes/serial/ProfileList (1.15s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:171: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:181: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.15s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.21s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:160: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-113738
no_kubernetes_test.go:160: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-113738: (1.214263174s)
--- PASS: TestNoKubernetes/serial/Stop (1.21s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (8.43s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:193: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-113738 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:193: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-113738 --driver=docker  --container-runtime=docker: (8.43077306s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (8.43s)

                                                
                                    
TestPause/serial/Pause (0.8s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-818649 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.80s)

                                                
                                    
TestPause/serial/VerifyStatus (0.4s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-818649 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-818649 --output=json --layout=cluster: exit status 2 (397.390091ms)

                                                
                                                
-- stdout --
	{"Name":"pause-818649","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.36.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-818649","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.40s)
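The `--layout=cluster` JSON above reports StatusCode 418 ("Paused") for the cluster and its apiserver, which is why the status command exits 2. A sketch of reading that structure, using a copy of the log's JSON trimmed to the fields relevant here:

```python
import json

# Trimmed copy of the --layout=cluster output above (only the fields used).
layout = json.loads("""
{"Name": "pause-818649", "StatusCode": 418, "StatusName": "Paused",
 "Nodes": [{"Name": "pause-818649",
            "Components": {"apiserver": {"StatusCode": 418, "StatusName": "Paused"},
                           "kubelet": {"StatusCode": 405, "StatusName": "Stopped"}}}]}
""")
node = layout["Nodes"][0]
assert layout["StatusName"] == "Paused"
print(node["Components"]["kubelet"]["StatusName"])  # Stopped
```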

                                                
                                    
TestPause/serial/Unpause (0.67s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-818649 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.67s)

                                                
                                    
TestPause/serial/PauseAgain (1s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-818649 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (1.00s)

                                                
                                    
TestPause/serial/DeletePaused (2.49s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-818649 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-818649 --alsologtostderr -v=5: (2.485114953s)
--- PASS: TestPause/serial/DeletePaused (2.49s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.4s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-113738 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-113738 "sudo systemctl is-active --quiet service kubelet": exit status 1 (398.713155ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.40s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (0.15s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-818649
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-818649: exit status 1 (18.52941ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-818649: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.15s)
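`docker volume inspect` on the deleted profile prints `[]` on stdout and exits 1; the empty result list is what confirms the volume is gone. A sketch of that check on the output shown above:

```python
import json

# `docker volume inspect pause-818649` printed "[]" after deletion;
# an empty inspect result means no such volume remains.
volumes = json.loads("[]")
deleted = len(volumes) == 0
print(deleted)  # True
```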

                                                
                                    
TestStoppedBinaryUpgrade/Setup (1.02s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.02s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (77.26s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.2407419895 start -p stopped-upgrade-087827 --memory=3072 --vm-driver=docker  --container-runtime=docker
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.2407419895 start -p stopped-upgrade-087827 --memory=3072 --vm-driver=docker  --container-runtime=docker: (44.997277605s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.2407419895 -p stopped-upgrade-087827 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.2407419895 -p stopped-upgrade-087827 stop: (10.973723732s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-087827 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E0908 14:20:03.988686    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/addons-238540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-087827 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (21.285239873s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (77.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (75.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-414101 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-414101 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker: (1m15.124376275s)
--- PASS: TestNetworkPlugins/group/auto/Start (75.12s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.18s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-087827
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-087827: (1.181389908s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.18s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (73.06s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-414101 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-414101 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker: (1m13.055889888s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (73.06s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.54s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-414101 "pgrep -a kubelet"
I0908 14:20:24.772549    4120 config.go:182] Loaded profile config "auto-414101": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.54s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (11.54s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-414101 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-5nvv4" [8552c51e-a49f-4e24-bb84-b97e28c1b444] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-5nvv4" [8552c51e-a49f-4e24-bb84-b97e28c1b444] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.004297025s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.54s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-414101 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.33s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-414101 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.27s)

TestNetworkPlugins/group/auto/HairPin (0.25s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-414101 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.25s)

TestNetworkPlugins/group/calico/Start (85.47s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-414101 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-414101 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker: (1m25.46826041s)
--- PASS: TestNetworkPlugins/group/calico/Start (85.47s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-6zs8p" [2405ac10-c9c4-4f26-bff1-076686d60e9a] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004999027s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.49s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-414101 "pgrep -a kubelet"
I0908 14:21:37.641372    4120 config.go:182] Loaded profile config "kindnet-414101": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.49s)

TestNetworkPlugins/group/kindnet/NetCatPod (10.38s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-414101 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-c72b8" [78c666ce-9266-4495-a9f7-5dc2485653dc] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-c72b8" [78c666ce-9266-4495-a9f7-5dc2485653dc] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.003746683s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.38s)

TestNetworkPlugins/group/kindnet/DNS (0.27s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-414101 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.27s)

TestNetworkPlugins/group/kindnet/Localhost (0.31s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-414101 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.31s)

TestNetworkPlugins/group/kindnet/HairPin (0.24s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-414101 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.24s)

TestNetworkPlugins/group/custom-flannel/Start (74.67s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-414101 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker
E0908 14:22:20.268091    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/functional-082913/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-414101 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker: (1m14.66769245s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (74.67s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-nrcwv" [2362a27e-a73b-4455-aadd-325796adb870] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004667265s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.38s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-414101 "pgrep -a kubelet"
I0908 14:22:34.532624    4120 config.go:182] Loaded profile config "calico-414101": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.38s)

TestNetworkPlugins/group/calico/NetCatPod (13.45s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-414101 replace --force -f testdata/netcat-deployment.yaml
I0908 14:22:34.922074    4120 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-qb9qd" [e539c2f2-32b9-476e-9fea-add80c56563d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-qb9qd" [e539c2f2-32b9-476e-9fea-add80c56563d] Running
E0908 14:22:46.931167    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/skaffold-797663/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 13.004021787s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (13.45s)

TestNetworkPlugins/group/calico/DNS (0.3s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-414101 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.30s)

TestNetworkPlugins/group/calico/Localhost (0.29s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-414101 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.29s)

TestNetworkPlugins/group/calico/HairPin (0.43s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-414101 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.43s)

TestNetworkPlugins/group/false/Start (86.53s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p false-414101 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p false-414101 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker: (1m26.533736969s)
--- PASS: TestNetworkPlugins/group/false/Start (86.53s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.37s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-414101 "pgrep -a kubelet"
I0908 14:23:31.718869    4120 config.go:182] Loaded profile config "custom-flannel-414101": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.37s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (11.35s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-414101 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-c4djv" [21a3c963-4f1e-4c6f-9a18-4e049eccd2fc] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-c4djv" [21a3c963-4f1e-4c6f-9a18-4e049eccd2fc] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.004638097s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.35s)

TestNetworkPlugins/group/custom-flannel/DNS (0.27s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-414101 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.27s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.22s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-414101 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.22s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.25s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-414101 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.25s)

TestNetworkPlugins/group/enable-default-cni/Start (79.81s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-414101 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-414101 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker: (1m19.806824804s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (79.81s)

TestNetworkPlugins/group/false/KubeletFlags (0.32s)

=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p false-414101 "pgrep -a kubelet"
I0908 14:24:44.494129    4120 config.go:182] Loaded profile config "false-414101": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.32s)

TestNetworkPlugins/group/false/NetCatPod (11.4s)

=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-414101 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-gmmvw" [166f9114-c0f0-40ad-b854-efddfd1faf59] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0908 14:24:47.064544    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/addons-238540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-gmmvw" [166f9114-c0f0-40ad-b854-efddfd1faf59] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 11.006836978s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (11.40s)

TestNetworkPlugins/group/false/DNS (0.23s)

=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-414101 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.23s)

TestNetworkPlugins/group/false/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-414101 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.16s)

TestNetworkPlugins/group/false/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-414101 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.16s)

TestNetworkPlugins/group/flannel/Start (76.41s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-414101 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker
E0908 14:25:25.268621    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/auto-414101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:25:25.275015    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/auto-414101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:25:25.286449    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/auto-414101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:25:25.307981    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/auto-414101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:25:25.349366    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/auto-414101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:25:25.430733    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/auto-414101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:25:25.592443    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/auto-414101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:25:25.914530    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/auto-414101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:25:26.556787    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/auto-414101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:25:27.838120    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/auto-414101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-414101 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker: (1m16.412211398s)
--- PASS: TestNetworkPlugins/group/flannel/Start (76.41s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.35s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-414101 "pgrep -a kubelet"
I0908 14:25:29.923863    4120 config.go:182] Loaded profile config "enable-default-cni-414101": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.35s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.39s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-414101 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-9js6r" [bbfc854c-55fc-49df-a8cc-250734db1d64] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0908 14:25:30.399638    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/auto-414101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:25:35.520929    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/auto-414101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-9js6r" [bbfc854c-55fc-49df-a8cc-250734db1d64] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.00563882s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.39s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.25s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-414101 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.25s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.24s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-414101 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.24s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.19s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-414101 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.19s)

TestNetworkPlugins/group/bridge/Start (55.33s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-414101 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker
E0908 14:26:31.147791    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/kindnet-414101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:26:31.154929    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/kindnet-414101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:26:31.166284    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/kindnet-414101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:26:31.187690    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/kindnet-414101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:26:31.229009    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/kindnet-414101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:26:31.310323    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/kindnet-414101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:26:31.471806    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/kindnet-414101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:26:31.793845    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/kindnet-414101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:26:32.435157    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/kindnet-414101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-414101 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker: (55.328846206s)
--- PASS: TestNetworkPlugins/group/bridge/Start (55.33s)

TestNetworkPlugins/group/flannel/ControllerPod (6s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
E0908 14:26:33.716673    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/kindnet-414101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "kube-flannel-ds-ff8ng" [4e745334-283e-4d41-8d91-b44fcfa533d7] Running
E0908 14:26:36.278850    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/kindnet-414101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.002992068s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.00s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.44s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-414101 "pgrep -a kubelet"
I0908 14:26:40.140954    4120 config.go:182] Loaded profile config "flannel-414101": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.44s)

TestNetworkPlugins/group/flannel/NetCatPod (11.41s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-414101 replace --force -f testdata/netcat-deployment.yaml
I0908 14:26:40.537523    4120 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-cps2b" [300e249c-7094-460d-8ebd-eab8390f3ab6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0908 14:26:41.400697    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/kindnet-414101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-cps2b" [300e249c-7094-460d-8ebd-eab8390f3ab6] Running
E0908 14:26:47.205613    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/auto-414101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.00391846s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.41s)

TestNetworkPlugins/group/flannel/DNS (0.37s)
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-414101 exec deployment/netcat -- nslookup kubernetes.default
E0908 14:26:51.642702    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/kindnet-414101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.37s)

TestNetworkPlugins/group/flannel/Localhost (0.29s)
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-414101 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.29s)

TestNetworkPlugins/group/flannel/HairPin (0.35s)
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-414101 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.35s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.46s)
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-414101 "pgrep -a kubelet"
I0908 14:27:03.983041    4120 config.go:182] Loaded profile config "bridge-414101": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.46s)

TestNetworkPlugins/group/bridge/NetCatPod (11.38s)
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-414101 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-cdj5h" [32873964-7a7a-44ac-b11f-1368d0e81bdb] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-cdj5h" [32873964-7a7a-44ac-b11f-1368d0e81bdb] Running
E0908 14:27:12.124885    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/kindnet-414101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.00522672s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.38s)

TestNetworkPlugins/group/bridge/DNS (0.28s)
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-414101 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.28s)

TestNetworkPlugins/group/bridge/Localhost (0.24s)
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-414101 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.24s)

TestNetworkPlugins/group/bridge/HairPin (0.21s)
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-414101 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.21s)

TestNetworkPlugins/group/kubenet/Start (80.80s)
=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kubenet-414101 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker
E0908 14:27:20.268485    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/functional-082913/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:27:28.149803    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/calico-414101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:27:28.156206    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/calico-414101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:27:28.173058    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/calico-414101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:27:28.194530    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/calico-414101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:27:28.236941    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/calico-414101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:27:28.322917    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/calico-414101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:27:28.490880    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/calico-414101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:27:28.812725    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/calico-414101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:27:29.454721    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/calico-414101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:27:30.736106    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/calico-414101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:27:33.297986    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/calico-414101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kubenet-414101 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker: (1m20.802273858s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (80.80s)

TestStartStop/group/old-k8s-version/serial/FirstStart (62.13s)
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-975105 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.0
E0908 14:27:46.930949    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/skaffold-797663/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:27:48.773187    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/calico-414101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:27:53.086145    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/kindnet-414101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:28:09.127239    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/auto-414101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:28:09.254660    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/calico-414101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:28:32.033585    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/custom-flannel-414101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:28:32.040335    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/custom-flannel-414101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:28:32.051927    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/custom-flannel-414101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:28:32.073295    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/custom-flannel-414101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:28:32.114672    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/custom-flannel-414101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:28:32.196365    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/custom-flannel-414101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:28:32.358109    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/custom-flannel-414101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:28:32.679937    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/custom-flannel-414101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:28:33.321372    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/custom-flannel-414101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:28:34.602661    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/custom-flannel-414101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-975105 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.0: (1m2.130065508s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (62.13s)

TestNetworkPlugins/group/kubenet/KubeletFlags (0.31s)
=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kubenet-414101 "pgrep -a kubelet"
E0908 14:28:37.164588    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/custom-flannel-414101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
I0908 14:28:37.235515    4120 config.go:182] Loaded profile config "kubenet-414101": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.31s)

TestNetworkPlugins/group/kubenet/NetCatPod (11.29s)
=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-414101 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-5v4cl" [47ac6df1-7670-4130-a59d-e8c22f852f4b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0908 14:28:42.286026    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/custom-flannel-414101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-5v4cl" [47ac6df1-7670-4130-a59d-e8c22f852f4b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 11.003888908s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (11.29s)

TestStartStop/group/old-k8s-version/serial/DeployApp (10.38s)
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-975105 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [19a2cc13-2f30-464e-bf16-f7a6face4dd3] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [19a2cc13-2f30-464e-bf16-f7a6face4dd3] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.003828978s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-975105 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.38s)

TestNetworkPlugins/group/kubenet/DNS (0.20s)
=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-414101 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.20s)

TestNetworkPlugins/group/kubenet/Localhost (0.19s)
=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-414101 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.19s)

TestNetworkPlugins/group/kubenet/HairPin (0.18s)
=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-414101 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.18s)
E0908 14:34:24.581626    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/old-k8s-version-975105/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.49s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-975105 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-975105 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.310782282s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-975105 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.49s)

TestStartStop/group/old-k8s-version/serial/Stop (11.29s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-975105 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-975105 --alsologtostderr -v=3: (11.28572727s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (11.29s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.27s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-975105 -n old-k8s-version-975105
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-975105 -n old-k8s-version-975105: exit status 7 (105.553535ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-975105 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.27s)

TestStartStop/group/old-k8s-version/serial/SecondStart (64.44s)
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-975105 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-975105 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.0: (1m4.088471492s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-975105 -n old-k8s-version-975105
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (64.44s)

TestStartStop/group/no-preload/serial/FirstStart (61.00s)
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-726865 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0
E0908 14:29:13.010061    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/custom-flannel-414101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:29:15.008035    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/kindnet-414101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:29:44.803961    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/false-414101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:29:44.810241    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/false-414101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:29:44.821597    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/false-414101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:29:44.842881    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/false-414101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:29:44.884218    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/false-414101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:29:44.965662    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/false-414101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:29:45.127537    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/false-414101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:29:45.449104    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/false-414101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:29:46.090497    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/false-414101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:29:47.372317    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/false-414101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:29:49.934284    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/false-414101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:29:53.974892    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/custom-flannel-414101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:29:55.055507    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/false-414101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:30:03.989063    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/addons-238540/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:30:05.297715    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/false-414101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-726865 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0: (1m0.99792875s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (61.00s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.00s)
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-jr694" [b64ceb02-05c6-434d-9ece-b416c0430c79] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003350015s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/no-preload/serial/DeployApp (8.37s)
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-726865 create -f testdata/busybox.yaml
E0908 14:30:12.137914    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/calico-414101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [8e935fe8-d216-45e5-b6a1-57d31ec04fc4] Pending
helpers_test.go:352: "busybox" [8e935fe8-d216-45e5-b6a1-57d31ec04fc4] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [8e935fe8-d216-45e5-b6a1-57d31ec04fc4] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.003409422s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-726865 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.37s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.14s)
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-jr694" [b64ceb02-05c6-434d-9ece-b416c0430c79] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004079862s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-975105 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.14s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.12s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-726865 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-726865 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.007062657s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-726865 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.12s)

TestStartStop/group/no-preload/serial/Stop (12.3s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-726865 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-726865 --alsologtostderr -v=3: (12.299320298s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.30s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-975105 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/old-k8s-version/serial/Pause (3.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-975105 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-975105 -n old-k8s-version-975105
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-975105 -n old-k8s-version-975105: exit status 2 (322.03395ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-975105 -n old-k8s-version-975105
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-975105 -n old-k8s-version-975105: exit status 2 (331.826305ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-975105 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-975105 -n old-k8s-version-975105
E0908 14:30:25.268241    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/auto-414101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-975105 -n old-k8s-version-975105
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.07s)

TestStartStop/group/embed-certs/serial/FirstStart (53.58s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-837881 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0
E0908 14:30:30.290214    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/enable-default-cni-414101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:30:30.296570    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/enable-default-cni-414101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:30:30.308018    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/enable-default-cni-414101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:30:30.329390    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/enable-default-cni-414101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:30:30.370755    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/enable-default-cni-414101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:30:30.452153    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/enable-default-cni-414101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:30:30.613670    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/enable-default-cni-414101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:30:30.935327    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/enable-default-cni-414101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:30:31.577370    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/enable-default-cni-414101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:30:32.858750    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/enable-default-cni-414101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-837881 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0: (53.575131189s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (53.58s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.27s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-726865 -n no-preload-726865
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-726865 -n no-preload-726865: exit status 7 (100.903786ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-726865 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.27s)

TestStartStop/group/no-preload/serial/SecondStart (61.41s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-726865 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0
E0908 14:30:35.420714    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/enable-default-cni-414101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:30:40.542596    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/enable-default-cni-414101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:30:50.784794    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/enable-default-cni-414101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:30:52.968796    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/auto-414101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:31:06.742105    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/false-414101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:31:11.266026    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/enable-default-cni-414101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:31:15.896197    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/custom-flannel-414101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-726865 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0: (1m1.02585403s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-726865 -n no-preload-726865
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (61.41s)

TestStartStop/group/embed-certs/serial/DeployApp (10.37s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-837881 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [8de1382d-df18-4291-86a1-c0b82356d136] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [8de1382d-df18-4291-86a1-c0b82356d136] Running
E0908 14:31:31.148515    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/kindnet-414101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.003373765s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-837881 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.37s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.1s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-837881 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-837881 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.10s)

TestStartStop/group/embed-certs/serial/Stop (11.08s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-837881 --alsologtostderr -v=3
E0908 14:31:33.697047    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/flannel-414101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:31:33.703381    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/flannel-414101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:31:33.714725    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/flannel-414101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:31:33.736312    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/flannel-414101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:31:33.778311    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/flannel-414101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:31:33.859672    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/flannel-414101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:31:34.021551    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/flannel-414101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:31:34.343140    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/flannel-414101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:31:34.984902    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/flannel-414101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-837881 --alsologtostderr -v=3: (11.082330295s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (11.08s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-g9ppn" [543cdd45-a419-42a0-9b03-86f781900f64] Running
E0908 14:31:36.266232    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/flannel-414101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:31:38.827773    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/flannel-414101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003584203s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.13s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-g9ppn" [543cdd45-a419-42a0-9b03-86f781900f64] Running
E0908 14:31:43.949522    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/flannel-414101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003542305s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-726865 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.13s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.26s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-837881 -n embed-certs-837881
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-837881 -n embed-certs-837881: exit status 7 (95.461136ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-837881 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.26s)

TestStartStop/group/embed-certs/serial/SecondStart (60.1s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-837881 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-837881 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0: (59.736190901s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-837881 -n embed-certs-837881
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (60.10s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-726865 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/no-preload/serial/Pause (3.07s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-726865 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-726865 -n no-preload-726865
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-726865 -n no-preload-726865: exit status 2 (322.330726ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-726865 -n no-preload-726865
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-726865 -n no-preload-726865: exit status 2 (329.226757ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-726865 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-726865 -n no-preload-726865
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-726865 -n no-preload-726865
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.07s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (81.07s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-406434 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0
E0908 14:31:54.191735    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/flannel-414101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:31:58.850446    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/kindnet-414101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:32:03.335756    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/functional-082913/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:32:04.321250    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/bridge-414101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:32:04.327616    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/bridge-414101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:32:04.338978    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/bridge-414101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:32:04.360349    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/bridge-414101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:32:04.401718    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/bridge-414101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:32:04.482931    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/bridge-414101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:32:04.644349    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/bridge-414101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:32:04.966160    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/bridge-414101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:32:05.608318    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/bridge-414101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:32:06.889576    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/bridge-414101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:32:09.451605    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/bridge-414101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:32:14.573222    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/bridge-414101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:32:14.673615    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/flannel-414101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:32:20.268727    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/functional-082913/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:32:24.814661    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/bridge-414101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:32:28.149966    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/calico-414101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:32:28.663417    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/false-414101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-406434 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0: (1m21.069696799s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (81.07s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-45w24" [7e1cc3c4-4fca-4da6-a745-6e99a994523c] Running
E0908 14:32:45.297378    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/bridge-414101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:32:46.930306    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/skaffold-797663/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.002849867s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-45w24" [7e1cc3c4-4fca-4da6-a745-6e99a994523c] Running
E0908 14:32:55.635868    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/flannel-414101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003872849s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-837881 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
E0908 14:32:55.980029    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/calico-414101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.11s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-837881 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (2.99s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-837881 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-837881 -n embed-certs-837881
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-837881 -n embed-certs-837881: exit status 2 (347.699054ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-837881 -n embed-certs-837881
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-837881 -n embed-certs-837881: exit status 2 (334.439376ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-837881 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-837881 -n embed-certs-837881
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-837881 -n embed-certs-837881
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.99s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (43.18s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-233500 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0
E0908 14:33:14.155526    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/enable-default-cni-414101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-233500 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0: (43.181176794s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (43.18s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.56s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-406434 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [2a0f8129-e354-405e-bd24-9312f69745d3] Pending
helpers_test.go:352: "busybox" [2a0f8129-e354-405e-bd24-9312f69745d3] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [2a0f8129-e354-405e-bd24-9312f69745d3] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.004708462s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-406434 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.56s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.63s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-406434 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0908 14:33:26.259551    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/bridge-414101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-406434 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.462035722s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-406434 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.63s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (11.13s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-406434 --alsologtostderr -v=3
E0908 14:33:32.033668    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/custom-flannel-414101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:33:37.505187    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/kubenet-414101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:33:37.511548    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/kubenet-414101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:33:37.522884    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/kubenet-414101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:33:37.544252    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/kubenet-414101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:33:37.586408    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/kubenet-414101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:33:37.669264    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/kubenet-414101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-406434 --alsologtostderr -v=3: (11.13053049s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (11.13s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.28s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-406434 -n default-k8s-diff-port-406434
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-406434 -n default-k8s-diff-port-406434: exit status 7 (110.588326ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-406434 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E0908 14:33:37.832647    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/kubenet-414101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.28s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (36.35s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-406434 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0
E0908 14:33:38.154589    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/kubenet-414101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:33:38.796688    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/kubenet-414101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:33:40.078981    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/kubenet-414101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:33:42.640202    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/kubenet-414101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:33:43.602603    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/old-k8s-version-975105/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:33:43.608893    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/old-k8s-version-975105/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:33:43.620236    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/old-k8s-version-975105/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:33:43.641590    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/old-k8s-version-975105/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:33:43.683279    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/old-k8s-version-975105/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:33:43.767053    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/old-k8s-version-975105/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:33:43.928469    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/old-k8s-version-975105/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:33:44.249982    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/old-k8s-version-975105/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:33:44.891894    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/old-k8s-version-975105/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-406434 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0: (35.869975111s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-406434 -n default-k8s-diff-port-406434
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (36.35s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.57s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-233500 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0908 14:33:46.173354    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/old-k8s-version-975105/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-233500 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.572925616s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.57s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (9.33s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-233500 --alsologtostderr -v=3
E0908 14:33:47.761430    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/kubenet-414101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:33:48.735032    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/old-k8s-version-975105/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:33:53.857260    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/old-k8s-version-975105/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-233500 --alsologtostderr -v=3: (9.328137378s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (9.33s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.34s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-233500 -n newest-cni-233500
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-233500 -n newest-cni-233500: exit status 7 (144.836154ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-233500 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.34s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (21.38s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-233500 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0
E0908 14:33:58.003287    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/kubenet-414101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:33:59.738057    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/custom-flannel-414101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:34:04.099379    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/old-k8s-version-975105/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-233500 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0: (20.970114829s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-233500 -n newest-cni-233500
E0908 14:34:17.557505    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/flannel-414101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (21.38s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-6vmkc" [9b722dcb-4d74-4b48-a2ba-47df7ce27e60] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004379711s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-233500 image list --format=json
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (3.08s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-233500 --alsologtostderr -v=1
E0908 14:34:18.485238    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-2320/.minikube/profiles/kubenet-414101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-233500 -n newest-cni-233500
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-233500 -n newest-cni-233500: exit status 2 (360.644632ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-233500 -n newest-cni-233500
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-233500 -n newest-cni-233500: exit status 2 (333.66572ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-233500 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-233500 -n newest-cni-233500
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-233500 -n newest-cni-233500
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.08s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-6vmkc" [9b722dcb-4d74-4b48-a2ba-47df7ce27e60] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004054926s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-406434 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.11s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.22s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-406434 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.22s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (2.84s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-406434 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-406434 -n default-k8s-diff-port-406434
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-406434 -n default-k8s-diff-port-406434: exit status 2 (315.44204ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-406434 -n default-k8s-diff-port-406434
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-406434 -n default-k8s-diff-port-406434: exit status 2 (312.346879ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-406434 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-406434 -n default-k8s-diff-port-406434
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-406434 -n default-k8s-diff-port-406434
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.84s)

Test skip (26/347)

TestDownloadOnly/v1.28.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

TestDownloadOnly/v1.28.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

TestDownloadOnly/v1.28.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

TestDownloadOnly/v1.34.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.34.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.0/cached-images (0.00s)

TestDownloadOnly/v1.34.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.34.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.0/binaries (0.00s)

TestDownloadOnly/v1.34.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.34.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.0/kubectl (0.00s)

TestDownloadOnlyKic (0.56s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-208047 --alsologtostderr --driver=docker  --container-runtime=docker
aaa_download_only_test.go:244: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-208047" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-208047
--- SKIP: TestDownloadOnlyKic (0.56s)

TestAddons/serial/GCPAuth/RealCredentials (0.01s)

=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:759: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.01s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestAddons/parallel/AmdGpuDevicePlugin (0s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1033: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1792: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestFunctionalNewestKubernetes (0s)

=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestNetworkPlugins/group/cilium (5.57s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:636: 
----------------------- debugLogs start: cilium-414101 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-414101

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-414101

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-414101

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-414101

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-414101

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-414101

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-414101

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-414101

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-414101

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-414101

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-414101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-414101"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-414101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-414101"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-414101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-414101"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-414101

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-414101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-414101"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-414101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-414101"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-414101" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-414101" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-414101" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-414101" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-414101" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-414101" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-414101" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-414101" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-414101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-414101"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-414101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-414101"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-414101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-414101"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-414101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-414101"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-414101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-414101"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-414101

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-414101

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-414101" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-414101" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-414101

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-414101

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-414101" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-414101" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-414101" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-414101" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-414101" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-414101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-414101"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-414101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-414101"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-414101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-414101"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-414101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-414101"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-414101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-414101"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-414101

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-414101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-414101"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-414101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-414101"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-414101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-414101"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-414101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-414101"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-414101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-414101"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-414101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-414101"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-414101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-414101"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-414101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-414101"

>>> host: cri-dockerd version:
* Profile "cilium-414101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-414101"

>>> host: containerd daemon status:
* Profile "cilium-414101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-414101"

>>> host: containerd daemon config:
* Profile "cilium-414101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-414101"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-414101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-414101"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-414101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-414101"

>>> host: containerd config dump:
* Profile "cilium-414101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-414101"

>>> host: crio daemon status:
* Profile "cilium-414101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-414101"

>>> host: crio daemon config:
* Profile "cilium-414101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-414101"

>>> host: /etc/crio:
* Profile "cilium-414101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-414101"

>>> host: crio config:
* Profile "cilium-414101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-414101"

----------------------- debugLogs end: cilium-414101 [took: 5.331596917s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-414101" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-414101
--- SKIP: TestNetworkPlugins/group/cilium (5.57s)

TestStartStop/group/disable-driver-mounts (0.2s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-416597" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-416597
--- SKIP: TestStartStop/group/disable-driver-mounts (0.20s)