Test Report: Docker_Linux_docker_arm64 21643

cc42fd2f8cec8fa883ff6f7397a2f6141c487062:2025-10-02:41725

Failed tests: 1/347

Order  Failed test            Duration
258    TestScheduledStopUnix  42.09s
TestScheduledStopUnix (42.09s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-273808 --memory=3072 --driver=docker  --container-runtime=docker
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-273808 --memory=3072 --driver=docker  --container-runtime=docker: (37.273196733s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-273808 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-273808 -n scheduled-stop-273808
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-273808 --schedule 15s
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:98: process 1489330 running but should have been killed on reschedule of stop
panic.go:636: *** TestScheduledStopUnix FAILED at 2025-10-02 07:01:06.923295614 +0000 UTC m=+2428.094611752
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestScheduledStopUnix]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestScheduledStopUnix]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect scheduled-stop-273808
helpers_test.go:243: (dbg) docker inspect scheduled-stop-273808:

-- stdout --
	[
	    {
	        "Id": "385edf8f269ac45cb2c2712ce9c6ea5c659c2bc387459b09df2369c48cc3968d",
	        "Created": "2025-10-02T07:00:34.162399099Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1486522,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-02T07:00:34.229066637Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5f534d1f6dbdc6822bb3d07eb55e2a83d08e94cbdcc855a877b4f3dd1ac1278e",
	        "ResolvConfPath": "/var/lib/docker/containers/385edf8f269ac45cb2c2712ce9c6ea5c659c2bc387459b09df2369c48cc3968d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/385edf8f269ac45cb2c2712ce9c6ea5c659c2bc387459b09df2369c48cc3968d/hostname",
	        "HostsPath": "/var/lib/docker/containers/385edf8f269ac45cb2c2712ce9c6ea5c659c2bc387459b09df2369c48cc3968d/hosts",
	        "LogPath": "/var/lib/docker/containers/385edf8f269ac45cb2c2712ce9c6ea5c659c2bc387459b09df2369c48cc3968d/385edf8f269ac45cb2c2712ce9c6ea5c659c2bc387459b09df2369c48cc3968d-json.log",
	        "Name": "/scheduled-stop-273808",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "scheduled-stop-273808:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "scheduled-stop-273808",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "385edf8f269ac45cb2c2712ce9c6ea5c659c2bc387459b09df2369c48cc3968d",
	                "LowerDir": "/var/lib/docker/overlay2/70c00a18e9d10e89e06a785f0f9373c5b0cc82248bb423a3be7479c1f1b6e631-init/diff:/var/lib/docker/overlay2/e75aeb731217e4929bbe543c44bed11f3df1ccbcd034bec040802dc1e2cd58a4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/70c00a18e9d10e89e06a785f0f9373c5b0cc82248bb423a3be7479c1f1b6e631/merged",
	                "UpperDir": "/var/lib/docker/overlay2/70c00a18e9d10e89e06a785f0f9373c5b0cc82248bb423a3be7479c1f1b6e631/diff",
	                "WorkDir": "/var/lib/docker/overlay2/70c00a18e9d10e89e06a785f0f9373c5b0cc82248bb423a3be7479c1f1b6e631/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "scheduled-stop-273808",
	                "Source": "/var/lib/docker/volumes/scheduled-stop-273808/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "scheduled-stop-273808",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "scheduled-stop-273808",
	                "name.minikube.sigs.k8s.io": "scheduled-stop-273808",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3e3ec3cfbf696815c5fb01758e4e4cc0b22a19e82f035cd941dc099850290912",
	            "SandboxKey": "/var/run/docker/netns/3e3ec3cfbf69",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34154"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34155"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34158"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34156"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34157"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "scheduled-stop-273808": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "06:8b:b0:35:16:b4",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "64f3d9b6c1db52ad9baa5e5becbb7016464c592c46a36ca76947e3bc955715f4",
	                    "EndpointID": "8d47ae71d2849adfb76b01b92ff6188d83c96c08ec69edf629f8d5d3697f50f7",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "scheduled-stop-273808",
	                        "385edf8f269a"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-273808 -n scheduled-stop-273808
helpers_test.go:252: <<< TestScheduledStopUnix FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestScheduledStopUnix]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p scheduled-stop-273808 logs -n 25
helpers_test.go:260: TestScheduledStopUnix logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                            ARGS                                                                             │        PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ stop    │ -p multinode-355238                                                                                                                                         │ multinode-355238      │ jenkins │ v1.37.0 │ 02 Oct 25 06:55 UTC │ 02 Oct 25 06:55 UTC │
	│ start   │ -p multinode-355238 --wait=true -v=5 --alsologtostderr                                                                                                      │ multinode-355238      │ jenkins │ v1.37.0 │ 02 Oct 25 06:55 UTC │ 02 Oct 25 06:56 UTC │
	│ node    │ list -p multinode-355238                                                                                                                                    │ multinode-355238      │ jenkins │ v1.37.0 │ 02 Oct 25 06:56 UTC │                     │
	│ node    │ multinode-355238 node delete m03                                                                                                                            │ multinode-355238      │ jenkins │ v1.37.0 │ 02 Oct 25 06:56 UTC │ 02 Oct 25 06:56 UTC │
	│ stop    │ multinode-355238 stop                                                                                                                                       │ multinode-355238      │ jenkins │ v1.37.0 │ 02 Oct 25 06:56 UTC │ 02 Oct 25 06:56 UTC │
	│ start   │ -p multinode-355238 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=docker                                                          │ multinode-355238      │ jenkins │ v1.37.0 │ 02 Oct 25 06:56 UTC │ 02 Oct 25 06:57 UTC │
	│ node    │ list -p multinode-355238                                                                                                                                    │ multinode-355238      │ jenkins │ v1.37.0 │ 02 Oct 25 06:57 UTC │                     │
	│ start   │ -p multinode-355238-m02 --driver=docker  --container-runtime=docker                                                                                         │ multinode-355238-m02  │ jenkins │ v1.37.0 │ 02 Oct 25 06:57 UTC │                     │
	│ start   │ -p multinode-355238-m03 --driver=docker  --container-runtime=docker                                                                                         │ multinode-355238-m03  │ jenkins │ v1.37.0 │ 02 Oct 25 06:57 UTC │ 02 Oct 25 06:58 UTC │
	│ node    │ add -p multinode-355238                                                                                                                                     │ multinode-355238      │ jenkins │ v1.37.0 │ 02 Oct 25 06:58 UTC │                     │
	│ delete  │ -p multinode-355238-m03                                                                                                                                     │ multinode-355238-m03  │ jenkins │ v1.37.0 │ 02 Oct 25 06:58 UTC │ 02 Oct 25 06:58 UTC │
	│ delete  │ -p multinode-355238                                                                                                                                         │ multinode-355238      │ jenkins │ v1.37.0 │ 02 Oct 25 06:58 UTC │ 02 Oct 25 06:58 UTC │
	│ start   │ -p test-preload-876119 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.32.0 │ test-preload-876119   │ jenkins │ v1.37.0 │ 02 Oct 25 06:58 UTC │ 02 Oct 25 06:59 UTC │
	│ image   │ test-preload-876119 image pull gcr.io/k8s-minikube/busybox                                                                                                  │ test-preload-876119   │ jenkins │ v1.37.0 │ 02 Oct 25 06:59 UTC │ 02 Oct 25 06:59 UTC │
	│ stop    │ -p test-preload-876119                                                                                                                                      │ test-preload-876119   │ jenkins │ v1.37.0 │ 02 Oct 25 06:59 UTC │ 02 Oct 25 06:59 UTC │
	│ start   │ -p test-preload-876119 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker                                         │ test-preload-876119   │ jenkins │ v1.37.0 │ 02 Oct 25 06:59 UTC │ 02 Oct 25 07:00 UTC │
	│ image   │ test-preload-876119 image list                                                                                                                              │ test-preload-876119   │ jenkins │ v1.37.0 │ 02 Oct 25 07:00 UTC │ 02 Oct 25 07:00 UTC │
	│ delete  │ -p test-preload-876119                                                                                                                                      │ test-preload-876119   │ jenkins │ v1.37.0 │ 02 Oct 25 07:00 UTC │ 02 Oct 25 07:00 UTC │
	│ start   │ -p scheduled-stop-273808 --memory=3072 --driver=docker  --container-runtime=docker                                                                          │ scheduled-stop-273808 │ jenkins │ v1.37.0 │ 02 Oct 25 07:00 UTC │ 02 Oct 25 07:01 UTC │
	│ stop    │ -p scheduled-stop-273808 --schedule 5m                                                                                                                      │ scheduled-stop-273808 │ jenkins │ v1.37.0 │ 02 Oct 25 07:01 UTC │                     │
	│ stop    │ -p scheduled-stop-273808 --schedule 5m                                                                                                                      │ scheduled-stop-273808 │ jenkins │ v1.37.0 │ 02 Oct 25 07:01 UTC │                     │
	│ stop    │ -p scheduled-stop-273808 --schedule 5m                                                                                                                      │ scheduled-stop-273808 │ jenkins │ v1.37.0 │ 02 Oct 25 07:01 UTC │                     │
	│ stop    │ -p scheduled-stop-273808 --schedule 15s                                                                                                                     │ scheduled-stop-273808 │ jenkins │ v1.37.0 │ 02 Oct 25 07:01 UTC │                     │
	│ stop    │ -p scheduled-stop-273808 --schedule 15s                                                                                                                     │ scheduled-stop-273808 │ jenkins │ v1.37.0 │ 02 Oct 25 07:01 UTC │                     │
	│ stop    │ -p scheduled-stop-273808 --schedule 15s                                                                                                                     │ scheduled-stop-273808 │ jenkins │ v1.37.0 │ 02 Oct 25 07:01 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 07:00:29
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 07:00:29.178951 1486138 out.go:360] Setting OutFile to fd 1 ...
	I1002 07:00:29.179069 1486138 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 07:00:29.179072 1486138 out.go:374] Setting ErrFile to fd 2...
	I1002 07:00:29.179076 1486138 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 07:00:29.179316 1486138 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-1281649/.minikube/bin
	I1002 07:00:29.179709 1486138 out.go:368] Setting JSON to false
	I1002 07:00:29.180575 1486138 start.go:130] hostinfo: {"hostname":"ip-172-31-29-130","uptime":24164,"bootTime":1759364266,"procs":154,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1002 07:00:29.180631 1486138 start.go:140] virtualization:  
	I1002 07:00:29.184686 1486138 out.go:179] * [scheduled-stop-273808] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1002 07:00:29.189271 1486138 out.go:179]   - MINIKUBE_LOCATION=21643
	I1002 07:00:29.189383 1486138 notify.go:220] Checking for updates...
	I1002 07:00:29.196136 1486138 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 07:00:29.199445 1486138 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21643-1281649/kubeconfig
	I1002 07:00:29.202520 1486138 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21643-1281649/.minikube
	I1002 07:00:29.205703 1486138 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1002 07:00:29.208749 1486138 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 07:00:29.212129 1486138 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 07:00:29.238829 1486138 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1002 07:00:29.238949 1486138 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 07:00:29.298181 1486138 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2025-10-02 07:00:29.288918219 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 07:00:29.298277 1486138 docker.go:318] overlay module found
	I1002 07:00:29.301693 1486138 out.go:179] * Using the docker driver based on user configuration
	I1002 07:00:29.304649 1486138 start.go:304] selected driver: docker
	I1002 07:00:29.304659 1486138 start.go:924] validating driver "docker" against <nil>
	I1002 07:00:29.304671 1486138 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 07:00:29.305408 1486138 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 07:00:29.362568 1486138 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2025-10-02 07:00:29.35351188 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 07:00:29.362711 1486138 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1002 07:00:29.362926 1486138 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1002 07:00:29.365922 1486138 out.go:179] * Using Docker driver with root privileges
	I1002 07:00:29.368884 1486138 cni.go:84] Creating CNI manager for ""
	I1002 07:00:29.368956 1486138 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1002 07:00:29.368964 1486138 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1002 07:00:29.369043 1486138 start.go:348] cluster config:
	{Name:scheduled-stop-273808 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:scheduled-stop-273808 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 07:00:29.372253 1486138 out.go:179] * Starting "scheduled-stop-273808" primary control-plane node in "scheduled-stop-273808" cluster
	I1002 07:00:29.375133 1486138 cache.go:123] Beginning downloading kic base image for docker with docker
	I1002 07:00:29.378156 1486138 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 07:00:29.380965 1486138 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime docker
	I1002 07:00:29.381007 1486138 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21643-1281649/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-docker-overlay2-arm64.tar.lz4
	I1002 07:00:29.381015 1486138 cache.go:58] Caching tarball of preloaded images
	I1002 07:00:29.381112 1486138 preload.go:233] Found /home/jenkins/minikube-integration/21643-1281649/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1002 07:00:29.381120 1486138 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on docker
	I1002 07:00:29.381435 1486138 profile.go:143] Saving config to /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/scheduled-stop-273808/config.json ...
	I1002 07:00:29.381452 1486138 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/scheduled-stop-273808/config.json: {Name:mk5bbe684956a2b7bb552b021092cdae80a8efc7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 07:00:29.381625 1486138 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 07:00:29.400406 1486138 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1002 07:00:29.400417 1486138 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1002 07:00:29.400444 1486138 cache.go:232] Successfully downloaded all kic artifacts
	I1002 07:00:29.400469 1486138 start.go:360] acquireMachinesLock for scheduled-stop-273808: {Name:mkda6f7c4d465507031dbfae99ef6247f3fd0af5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 07:00:29.400582 1486138 start.go:364] duration metric: took 99.362µs to acquireMachinesLock for "scheduled-stop-273808"
	I1002 07:00:29.400606 1486138 start.go:93] Provisioning new machine with config: &{Name:scheduled-stop-273808 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:scheduled-stop-273808 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1002 07:00:29.400668 1486138 start.go:125] createHost starting for "" (driver="docker")
	I1002 07:00:29.404119 1486138 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1002 07:00:29.404345 1486138 start.go:159] libmachine.API.Create for "scheduled-stop-273808" (driver="docker")
	I1002 07:00:29.404388 1486138 client.go:168] LocalClient.Create starting
	I1002 07:00:29.404483 1486138 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21643-1281649/.minikube/certs/ca.pem
	I1002 07:00:29.404516 1486138 main.go:141] libmachine: Decoding PEM data...
	I1002 07:00:29.404528 1486138 main.go:141] libmachine: Parsing certificate...
	I1002 07:00:29.404575 1486138 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21643-1281649/.minikube/certs/cert.pem
	I1002 07:00:29.404594 1486138 main.go:141] libmachine: Decoding PEM data...
	I1002 07:00:29.404602 1486138 main.go:141] libmachine: Parsing certificate...
	I1002 07:00:29.404944 1486138 cli_runner.go:164] Run: docker network inspect scheduled-stop-273808 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1002 07:00:29.420944 1486138 cli_runner.go:211] docker network inspect scheduled-stop-273808 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1002 07:00:29.421015 1486138 network_create.go:284] running [docker network inspect scheduled-stop-273808] to gather additional debugging logs...
	I1002 07:00:29.421030 1486138 cli_runner.go:164] Run: docker network inspect scheduled-stop-273808
	W1002 07:00:29.436771 1486138 cli_runner.go:211] docker network inspect scheduled-stop-273808 returned with exit code 1
	I1002 07:00:29.436791 1486138 network_create.go:287] error running [docker network inspect scheduled-stop-273808]: docker network inspect scheduled-stop-273808: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network scheduled-stop-273808 not found
	I1002 07:00:29.436818 1486138 network_create.go:289] output of [docker network inspect scheduled-stop-273808]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network scheduled-stop-273808 not found
	
	** /stderr **
	I1002 07:00:29.436914 1486138 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 07:00:29.453399 1486138 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-200b88fe63d3 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:3a:44:19:92:75:0d} reservation:<nil>}
	I1002 07:00:29.453686 1486138 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-e62f03852c34 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:aa:41:83:14:3f:fc} reservation:<nil>}
	I1002 07:00:29.453942 1486138 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-52024f7d4aea IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:8a:75:f8:11:6c:e1} reservation:<nil>}
	I1002 07:00:29.454265 1486138 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001951d50}
	I1002 07:00:29.454279 1486138 network_create.go:124] attempt to create docker network scheduled-stop-273808 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1002 07:00:29.454334 1486138 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=scheduled-stop-273808 scheduled-stop-273808
	I1002 07:00:29.516274 1486138 network_create.go:108] docker network scheduled-stop-273808 192.168.76.0/24 created
	I1002 07:00:29.516293 1486138 kic.go:121] calculated static IP "192.168.76.2" for the "scheduled-stop-273808" container
	I1002 07:00:29.516381 1486138 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1002 07:00:29.530734 1486138 cli_runner.go:164] Run: docker volume create scheduled-stop-273808 --label name.minikube.sigs.k8s.io=scheduled-stop-273808 --label created_by.minikube.sigs.k8s.io=true
	I1002 07:00:29.548478 1486138 oci.go:103] Successfully created a docker volume scheduled-stop-273808
	I1002 07:00:29.548578 1486138 cli_runner.go:164] Run: docker run --rm --name scheduled-stop-273808-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=scheduled-stop-273808 --entrypoint /usr/bin/test -v scheduled-stop-273808:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -d /var/lib
	I1002 07:00:30.066861 1486138 oci.go:107] Successfully prepared a docker volume scheduled-stop-273808
	I1002 07:00:30.067130 1486138 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime docker
	I1002 07:00:30.067154 1486138 kic.go:194] Starting extracting preloaded images to volume ...
	I1002 07:00:30.067237 1486138 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21643-1281649/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v scheduled-stop-273808:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir
	I1002 07:00:34.090325 1486138 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21643-1281649/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v scheduled-stop-273808:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir: (4.023043428s)
	I1002 07:00:34.090347 1486138 kic.go:203] duration metric: took 4.023190387s to extract preloaded images to volume ...
	W1002 07:00:34.090486 1486138 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1002 07:00:34.090603 1486138 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1002 07:00:34.147926 1486138 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname scheduled-stop-273808 --name scheduled-stop-273808 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=scheduled-stop-273808 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=scheduled-stop-273808 --network scheduled-stop-273808 --ip 192.168.76.2 --volume scheduled-stop-273808:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d
	I1002 07:00:34.444430 1486138 cli_runner.go:164] Run: docker container inspect scheduled-stop-273808 --format={{.State.Running}}
	I1002 07:00:34.468608 1486138 cli_runner.go:164] Run: docker container inspect scheduled-stop-273808 --format={{.State.Status}}
	I1002 07:00:34.494284 1486138 cli_runner.go:164] Run: docker exec scheduled-stop-273808 stat /var/lib/dpkg/alternatives/iptables
	I1002 07:00:34.546787 1486138 oci.go:144] the created container "scheduled-stop-273808" has a running status.
	I1002 07:00:34.546825 1486138 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21643-1281649/.minikube/machines/scheduled-stop-273808/id_rsa...
	I1002 07:00:34.636229 1486138 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21643-1281649/.minikube/machines/scheduled-stop-273808/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1002 07:00:34.661798 1486138 cli_runner.go:164] Run: docker container inspect scheduled-stop-273808 --format={{.State.Status}}
	I1002 07:00:34.684676 1486138 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1002 07:00:34.684687 1486138 kic_runner.go:114] Args: [docker exec --privileged scheduled-stop-273808 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1002 07:00:34.736196 1486138 cli_runner.go:164] Run: docker container inspect scheduled-stop-273808 --format={{.State.Status}}
	I1002 07:00:34.756667 1486138 machine.go:93] provisionDockerMachine start ...
	I1002 07:00:34.756748 1486138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-273808
	I1002 07:00:34.793705 1486138 main.go:141] libmachine: Using SSH client type: native
	I1002 07:00:34.794044 1486138 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34154 <nil> <nil>}
	I1002 07:00:34.794051 1486138 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 07:00:34.794852 1486138 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:48734->127.0.0.1:34154: read: connection reset by peer
	I1002 07:00:37.927912 1486138 main.go:141] libmachine: SSH cmd err, output: <nil>: scheduled-stop-273808
	
	I1002 07:00:37.927926 1486138 ubuntu.go:182] provisioning hostname "scheduled-stop-273808"
	I1002 07:00:37.927997 1486138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-273808
	I1002 07:00:37.945637 1486138 main.go:141] libmachine: Using SSH client type: native
	I1002 07:00:37.945945 1486138 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34154 <nil> <nil>}
	I1002 07:00:37.945954 1486138 main.go:141] libmachine: About to run SSH command:
	sudo hostname scheduled-stop-273808 && echo "scheduled-stop-273808" | sudo tee /etc/hostname
	I1002 07:00:38.099171 1486138 main.go:141] libmachine: SSH cmd err, output: <nil>: scheduled-stop-273808
	
	I1002 07:00:38.099244 1486138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-273808
	I1002 07:00:38.118974 1486138 main.go:141] libmachine: Using SSH client type: native
	I1002 07:00:38.119296 1486138 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34154 <nil> <nil>}
	I1002 07:00:38.119319 1486138 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sscheduled-stop-273808' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 scheduled-stop-273808/g' /etc/hosts;
				else 
					echo '127.0.1.1 scheduled-stop-273808' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 07:00:38.252780 1486138 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 07:00:38.252796 1486138 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21643-1281649/.minikube CaCertPath:/home/jenkins/minikube-integration/21643-1281649/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21643-1281649/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21643-1281649/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21643-1281649/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21643-1281649/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21643-1281649/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21643-1281649/.minikube}
	I1002 07:00:38.252815 1486138 ubuntu.go:190] setting up certificates
	I1002 07:00:38.252822 1486138 provision.go:84] configureAuth start
	I1002 07:00:38.252882 1486138 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" scheduled-stop-273808
	I1002 07:00:38.271176 1486138 provision.go:143] copyHostCerts
	I1002 07:00:38.271239 1486138 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-1281649/.minikube/cert.pem, removing ...
	I1002 07:00:38.271246 1486138 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-1281649/.minikube/cert.pem
	I1002 07:00:38.271326 1486138 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-1281649/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21643-1281649/.minikube/cert.pem (1123 bytes)
	I1002 07:00:38.271420 1486138 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-1281649/.minikube/key.pem, removing ...
	I1002 07:00:38.271424 1486138 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-1281649/.minikube/key.pem
	I1002 07:00:38.271448 1486138 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-1281649/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21643-1281649/.minikube/key.pem (1675 bytes)
	I1002 07:00:38.271499 1486138 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-1281649/.minikube/ca.pem, removing ...
	I1002 07:00:38.271502 1486138 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-1281649/.minikube/ca.pem
	I1002 07:00:38.271523 1486138 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-1281649/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21643-1281649/.minikube/ca.pem (1082 bytes)
	I1002 07:00:38.271567 1486138 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21643-1281649/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21643-1281649/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21643-1281649/.minikube/certs/ca-key.pem org=jenkins.scheduled-stop-273808 san=[127.0.0.1 192.168.76.2 localhost minikube scheduled-stop-273808]
	I1002 07:00:38.428487 1486138 provision.go:177] copyRemoteCerts
	I1002 07:00:38.428545 1486138 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 07:00:38.428582 1486138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-273808
	I1002 07:00:38.449290 1486138 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34154 SSHKeyPath:/home/jenkins/minikube-integration/21643-1281649/.minikube/machines/scheduled-stop-273808/id_rsa Username:docker}
	I1002 07:00:38.551781 1486138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-1281649/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1002 07:00:38.568837 1486138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-1281649/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1002 07:00:38.585918 1486138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-1281649/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1002 07:00:38.603108 1486138 provision.go:87] duration metric: took 350.262649ms to configureAuth
	I1002 07:00:38.603139 1486138 ubuntu.go:206] setting minikube options for container-runtime
	I1002 07:00:38.603307 1486138 config.go:182] Loaded profile config "scheduled-stop-273808": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
	I1002 07:00:38.603370 1486138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-273808
	I1002 07:00:38.620240 1486138 main.go:141] libmachine: Using SSH client type: native
	I1002 07:00:38.620549 1486138 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34154 <nil> <nil>}
	I1002 07:00:38.620556 1486138 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1002 07:00:38.752885 1486138 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1002 07:00:38.752896 1486138 ubuntu.go:71] root file system type: overlay
	I1002 07:00:38.753025 1486138 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1002 07:00:38.753091 1486138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-273808
	I1002 07:00:38.771401 1486138 main.go:141] libmachine: Using SSH client type: native
	I1002 07:00:38.771706 1486138 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34154 <nil> <nil>}
	I1002 07:00:38.771785 1486138 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1002 07:00:38.913533 1486138 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1002 07:00:38.913620 1486138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-273808
	I1002 07:00:38.935130 1486138 main.go:141] libmachine: Using SSH client type: native
	I1002 07:00:38.935448 1486138 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34154 <nil> <nil>}
	I1002 07:00:38.935464 1486138 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1002 07:00:39.880364 1486138 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-09-03 20:56:55.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-10-02 07:00:38.908454311 +0000
	@@ -9,23 +9,34 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I1002 07:00:39.880380 1486138 machine.go:96] duration metric: took 5.12370194s to provisionDockerMachine
	I1002 07:00:39.880391 1486138 client.go:171] duration metric: took 10.475997745s to LocalClient.Create
	I1002 07:00:39.880408 1486138 start.go:167] duration metric: took 10.476059594s to libmachine.API.Create "scheduled-stop-273808"
	I1002 07:00:39.880414 1486138 start.go:293] postStartSetup for "scheduled-stop-273808" (driver="docker")
	I1002 07:00:39.880423 1486138 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 07:00:39.880495 1486138 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 07:00:39.880541 1486138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-273808
	I1002 07:00:39.898903 1486138 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34154 SSHKeyPath:/home/jenkins/minikube-integration/21643-1281649/.minikube/machines/scheduled-stop-273808/id_rsa Username:docker}
	I1002 07:00:39.996453 1486138 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 07:00:40.000452 1486138 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 07:00:40.000472 1486138 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 07:00:40.000484 1486138 filesync.go:126] Scanning /home/jenkins/minikube-integration/21643-1281649/.minikube/addons for local assets ...
	I1002 07:00:40.000553 1486138 filesync.go:126] Scanning /home/jenkins/minikube-integration/21643-1281649/.minikube/files for local assets ...
	I1002 07:00:40.000636 1486138 filesync.go:149] local asset: /home/jenkins/minikube-integration/21643-1281649/.minikube/files/etc/ssl/certs/12835082.pem -> 12835082.pem in /etc/ssl/certs
	I1002 07:00:40.000739 1486138 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 07:00:40.058756 1486138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-1281649/.minikube/files/etc/ssl/certs/12835082.pem --> /etc/ssl/certs/12835082.pem (1708 bytes)
	I1002 07:00:40.079308 1486138 start.go:296] duration metric: took 198.87888ms for postStartSetup
	I1002 07:00:40.079706 1486138 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" scheduled-stop-273808
	I1002 07:00:40.100591 1486138 profile.go:143] Saving config to /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/scheduled-stop-273808/config.json ...
	I1002 07:00:40.100895 1486138 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 07:00:40.100937 1486138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-273808
	I1002 07:00:40.124879 1486138 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34154 SSHKeyPath:/home/jenkins/minikube-integration/21643-1281649/.minikube/machines/scheduled-stop-273808/id_rsa Username:docker}
	I1002 07:00:40.221553 1486138 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 07:00:40.227049 1486138 start.go:128] duration metric: took 10.826366919s to createHost
	I1002 07:00:40.227063 1486138 start.go:83] releasing machines lock for "scheduled-stop-273808", held for 10.826473361s
	I1002 07:00:40.227146 1486138 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" scheduled-stop-273808
	I1002 07:00:40.244685 1486138 ssh_runner.go:195] Run: cat /version.json
	I1002 07:00:40.244730 1486138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-273808
	I1002 07:00:40.244769 1486138 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 07:00:40.244817 1486138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-273808
	I1002 07:00:40.262783 1486138 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34154 SSHKeyPath:/home/jenkins/minikube-integration/21643-1281649/.minikube/machines/scheduled-stop-273808/id_rsa Username:docker}
	I1002 07:00:40.265401 1486138 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34154 SSHKeyPath:/home/jenkins/minikube-integration/21643-1281649/.minikube/machines/scheduled-stop-273808/id_rsa Username:docker}
	I1002 07:00:40.452350 1486138 ssh_runner.go:195] Run: systemctl --version
	I1002 07:00:40.459065 1486138 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 07:00:40.463682 1486138 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 07:00:40.463747 1486138 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 07:00:40.492800 1486138 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1002 07:00:40.492817 1486138 start.go:495] detecting cgroup driver to use...
	I1002 07:00:40.492850 1486138 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1002 07:00:40.492947 1486138 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 07:00:40.507710 1486138 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1002 07:00:40.516872 1486138 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1002 07:00:40.525991 1486138 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1002 07:00:40.526054 1486138 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1002 07:00:40.535019 1486138 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1002 07:00:40.543889 1486138 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1002 07:00:40.553079 1486138 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1002 07:00:40.562407 1486138 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 07:00:40.571018 1486138 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1002 07:00:40.580198 1486138 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1002 07:00:40.589765 1486138 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1002 07:00:40.598813 1486138 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 07:00:40.606214 1486138 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 07:00:40.613592 1486138 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 07:00:40.728676 1486138 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1002 07:00:40.825040 1486138 start.go:495] detecting cgroup driver to use...
	I1002 07:00:40.825078 1486138 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1002 07:00:40.825147 1486138 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1002 07:00:40.840636 1486138 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 07:00:40.854073 1486138 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 07:00:40.877051 1486138 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 07:00:40.890077 1486138 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1002 07:00:40.903312 1486138 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 07:00:40.917515 1486138 ssh_runner.go:195] Run: which cri-dockerd
	I1002 07:00:40.921206 1486138 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1002 07:00:40.929210 1486138 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1002 07:00:40.943074 1486138 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1002 07:00:41.054613 1486138 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1002 07:00:41.184329 1486138 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I1002 07:00:41.184431 1486138 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1002 07:00:41.199309 1486138 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1002 07:00:41.213443 1486138 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 07:00:41.340309 1486138 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1002 07:00:41.721341 1486138 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 07:00:41.735221 1486138 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1002 07:00:41.749348 1486138 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1002 07:00:41.763203 1486138 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1002 07:00:41.889514 1486138 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1002 07:00:42.018034 1486138 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 07:00:42.152173 1486138 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1002 07:00:42.173643 1486138 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1002 07:00:42.190441 1486138 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 07:00:42.322450 1486138 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1002 07:00:42.394467 1486138 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1002 07:00:42.410296 1486138 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1002 07:00:42.410361 1486138 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1002 07:00:42.414772 1486138 start.go:563] Will wait 60s for crictl version
	I1002 07:00:42.414902 1486138 ssh_runner.go:195] Run: which crictl
	I1002 07:00:42.419015 1486138 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 07:00:42.444787 1486138 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.4.0
	RuntimeApiVersion:  v1
	I1002 07:00:42.444857 1486138 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1002 07:00:42.472493 1486138 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1002 07:00:42.506380 1486138 out.go:252] * Preparing Kubernetes v1.34.1 on Docker 28.4.0 ...
	I1002 07:00:42.506469 1486138 cli_runner.go:164] Run: docker network inspect scheduled-stop-273808 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 07:00:42.523082 1486138 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1002 07:00:42.527128 1486138 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 07:00:42.537123 1486138 kubeadm.go:883] updating cluster {Name:scheduled-stop-273808 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:scheduled-stop-273808 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 07:00:42.537218 1486138 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime docker
	I1002 07:00:42.537270 1486138 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1002 07:00:42.558093 1486138 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.1
	registry.k8s.io/kube-scheduler:v1.34.1
	registry.k8s.io/kube-controller-manager:v1.34.1
	registry.k8s.io/kube-proxy:v1.34.1
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1002 07:00:42.558107 1486138 docker.go:621] Images already preloaded, skipping extraction
	I1002 07:00:42.558181 1486138 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1002 07:00:42.576358 1486138 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.1
	registry.k8s.io/kube-controller-manager:v1.34.1
	registry.k8s.io/kube-scheduler:v1.34.1
	registry.k8s.io/kube-proxy:v1.34.1
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1002 07:00:42.576372 1486138 cache_images.go:85] Images are preloaded, skipping loading
	I1002 07:00:42.576380 1486138 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 docker true true} ...
	I1002 07:00:42.576472 1486138 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=scheduled-stop-273808 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:scheduled-stop-273808 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1002 07:00:42.576537 1486138 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1002 07:00:42.629523 1486138 cni.go:84] Creating CNI manager for ""
	I1002 07:00:42.629537 1486138 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1002 07:00:42.629551 1486138 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 07:00:42.629571 1486138 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:scheduled-stop-273808 NodeName:scheduled-stop-273808 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 07:00:42.629694 1486138 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "scheduled-stop-273808"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1002 07:00:42.629758 1486138 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 07:00:42.637676 1486138 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 07:00:42.637749 1486138 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 07:00:42.645984 1486138 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (320 bytes)
	I1002 07:00:42.658748 1486138 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 07:00:42.671461 1486138 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2222 bytes)
	I1002 07:00:42.685392 1486138 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1002 07:00:42.689160 1486138 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 07:00:42.698746 1486138 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 07:00:42.810554 1486138 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 07:00:42.826654 1486138 certs.go:69] Setting up /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/scheduled-stop-273808 for IP: 192.168.76.2
	I1002 07:00:42.826665 1486138 certs.go:195] generating shared ca certs ...
	I1002 07:00:42.826680 1486138 certs.go:227] acquiring lock for ca certs: {Name:mkbfd31f90356176653bc4b00cb70c47296e672d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 07:00:42.826818 1486138 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21643-1281649/.minikube/ca.key
	I1002 07:00:42.826868 1486138 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21643-1281649/.minikube/proxy-client-ca.key
	I1002 07:00:42.826875 1486138 certs.go:257] generating profile certs ...
	I1002 07:00:42.826931 1486138 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/scheduled-stop-273808/client.key
	I1002 07:00:42.826948 1486138 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/scheduled-stop-273808/client.crt with IP's: []
	I1002 07:00:43.828499 1486138 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/scheduled-stop-273808/client.crt ...
	I1002 07:00:43.828516 1486138 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/scheduled-stop-273808/client.crt: {Name:mkad8bf6bf5df4956e6e276d92e2cf162291b199 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 07:00:43.828731 1486138 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/scheduled-stop-273808/client.key ...
	I1002 07:00:43.828739 1486138 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/scheduled-stop-273808/client.key: {Name:mk9b893fbf8fc5cd29a71a087c44b1227daeedaa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 07:00:43.828835 1486138 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/scheduled-stop-273808/apiserver.key.f78285d2
	I1002 07:00:43.828847 1486138 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/scheduled-stop-273808/apiserver.crt.f78285d2 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1002 07:00:44.576670 1486138 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/scheduled-stop-273808/apiserver.crt.f78285d2 ...
	I1002 07:00:44.576692 1486138 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/scheduled-stop-273808/apiserver.crt.f78285d2: {Name:mk0c18c587ff003bf548ec4f72e78a054f31b17a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 07:00:44.576887 1486138 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/scheduled-stop-273808/apiserver.key.f78285d2 ...
	I1002 07:00:44.576899 1486138 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/scheduled-stop-273808/apiserver.key.f78285d2: {Name:mk43a37e7a10824e4f0b540bb3e0fef5a36bd2bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 07:00:44.576993 1486138 certs.go:382] copying /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/scheduled-stop-273808/apiserver.crt.f78285d2 -> /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/scheduled-stop-273808/apiserver.crt
	I1002 07:00:44.577073 1486138 certs.go:386] copying /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/scheduled-stop-273808/apiserver.key.f78285d2 -> /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/scheduled-stop-273808/apiserver.key
	I1002 07:00:44.577132 1486138 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/scheduled-stop-273808/proxy-client.key
	I1002 07:00:44.577144 1486138 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/scheduled-stop-273808/proxy-client.crt with IP's: []
	I1002 07:00:45.086232 1486138 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/scheduled-stop-273808/proxy-client.crt ...
	I1002 07:00:45.086250 1486138 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/scheduled-stop-273808/proxy-client.crt: {Name:mk8af19aa416a487eef66ba6b52167a9e3ae4eb2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 07:00:45.086480 1486138 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/scheduled-stop-273808/proxy-client.key ...
	I1002 07:00:45.086490 1486138 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/scheduled-stop-273808/proxy-client.key: {Name:mk7b4cc0f5f596b1a133130becc7235ba6aced70 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 07:00:45.086728 1486138 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-1281649/.minikube/certs/1283508.pem (1338 bytes)
	W1002 07:00:45.086774 1486138 certs.go:480] ignoring /home/jenkins/minikube-integration/21643-1281649/.minikube/certs/1283508_empty.pem, impossibly tiny 0 bytes
	I1002 07:00:45.086784 1486138 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-1281649/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 07:00:45.086820 1486138 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-1281649/.minikube/certs/ca.pem (1082 bytes)
	I1002 07:00:45.086855 1486138 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-1281649/.minikube/certs/cert.pem (1123 bytes)
	I1002 07:00:45.086891 1486138 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-1281649/.minikube/certs/key.pem (1675 bytes)
	I1002 07:00:45.086953 1486138 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-1281649/.minikube/files/etc/ssl/certs/12835082.pem (1708 bytes)
	I1002 07:00:45.087624 1486138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-1281649/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 07:00:45.119920 1486138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-1281649/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1002 07:00:45.143527 1486138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-1281649/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 07:00:45.171085 1486138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-1281649/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1002 07:00:45.206750 1486138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/scheduled-stop-273808/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1002 07:00:45.237123 1486138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/scheduled-stop-273808/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1002 07:00:45.277288 1486138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/scheduled-stop-273808/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 07:00:45.310323 1486138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/scheduled-stop-273808/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1002 07:00:45.344251 1486138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-1281649/.minikube/certs/1283508.pem --> /usr/share/ca-certificates/1283508.pem (1338 bytes)
	I1002 07:00:45.378775 1486138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-1281649/.minikube/files/etc/ssl/certs/12835082.pem --> /usr/share/ca-certificates/12835082.pem (1708 bytes)
	I1002 07:00:45.411523 1486138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-1281649/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 07:00:45.435470 1486138 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 07:00:45.450252 1486138 ssh_runner.go:195] Run: openssl version
	I1002 07:00:45.456970 1486138 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1283508.pem && ln -fs /usr/share/ca-certificates/1283508.pem /etc/ssl/certs/1283508.pem"
	I1002 07:00:45.466194 1486138 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1283508.pem
	I1002 07:00:45.470471 1486138 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 06:27 /usr/share/ca-certificates/1283508.pem
	I1002 07:00:45.470536 1486138 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1283508.pem
	I1002 07:00:45.513901 1486138 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1283508.pem /etc/ssl/certs/51391683.0"
	I1002 07:00:45.522560 1486138 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12835082.pem && ln -fs /usr/share/ca-certificates/12835082.pem /etc/ssl/certs/12835082.pem"
	I1002 07:00:45.531247 1486138 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12835082.pem
	I1002 07:00:45.535042 1486138 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 06:27 /usr/share/ca-certificates/12835082.pem
	I1002 07:00:45.535103 1486138 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12835082.pem
	I1002 07:00:45.577706 1486138 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/12835082.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 07:00:45.586703 1486138 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 07:00:45.595566 1486138 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 07:00:45.599979 1486138 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 06:21 /usr/share/ca-certificates/minikubeCA.pem
	I1002 07:00:45.600041 1486138 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 07:00:45.641670 1486138 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 07:00:45.649898 1486138 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 07:00:45.653400 1486138 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1002 07:00:45.653440 1486138 kubeadm.go:400] StartCluster: {Name:scheduled-stop-273808 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:scheduled-stop-273808 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 07:00:45.653557 1486138 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1002 07:00:45.672345 1486138 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 07:00:45.680753 1486138 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 07:00:45.688489 1486138 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1002 07:00:45.688567 1486138 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 07:00:45.696501 1486138 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 07:00:45.696509 1486138 kubeadm.go:157] found existing configuration files:
	
	I1002 07:00:45.696566 1486138 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1002 07:00:45.704462 1486138 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 07:00:45.704525 1486138 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 07:00:45.711977 1486138 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1002 07:00:45.720070 1486138 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 07:00:45.720125 1486138 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 07:00:45.727531 1486138 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1002 07:00:45.735181 1486138 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 07:00:45.735241 1486138 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 07:00:45.742426 1486138 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1002 07:00:45.749962 1486138 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 07:00:45.750018 1486138 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1002 07:00:45.757466 1486138 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1002 07:00:45.835774 1486138 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1002 07:00:45.839286 1486138 kubeadm.go:318] [preflight] Running pre-flight checks
	I1002 07:00:45.866193 1486138 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1002 07:00:45.866259 1486138 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1002 07:00:45.866295 1486138 kubeadm.go:318] OS: Linux
	I1002 07:00:45.866342 1486138 kubeadm.go:318] CGROUPS_CPU: enabled
	I1002 07:00:45.866391 1486138 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1002 07:00:45.866439 1486138 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1002 07:00:45.866488 1486138 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1002 07:00:45.866538 1486138 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1002 07:00:45.866587 1486138 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1002 07:00:45.866634 1486138 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1002 07:00:45.866683 1486138 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1002 07:00:45.866742 1486138 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1002 07:00:45.937988 1486138 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 07:00:45.938108 1486138 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 07:00:45.938210 1486138 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1002 07:00:45.952373 1486138 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 07:00:45.958935 1486138 out.go:252]   - Generating certificates and keys ...
	I1002 07:00:45.959049 1486138 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1002 07:00:45.959124 1486138 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1002 07:00:46.905997 1486138 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1002 07:00:47.876546 1486138 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1002 07:00:48.437150 1486138 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1002 07:00:48.813811 1486138 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1002 07:00:49.578289 1486138 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1002 07:00:49.578576 1486138 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [localhost scheduled-stop-273808] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1002 07:00:50.161165 1486138 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1002 07:00:50.161479 1486138 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [localhost scheduled-stop-273808] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1002 07:00:50.379788 1486138 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1002 07:00:50.978125 1486138 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1002 07:00:51.357695 1486138 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1002 07:00:51.357943 1486138 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 07:00:52.428268 1486138 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 07:00:53.334705 1486138 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1002 07:00:54.493494 1486138 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 07:00:54.580501 1486138 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 07:00:55.220905 1486138 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 07:00:55.221667 1486138 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 07:00:55.226007 1486138 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 07:00:55.229646 1486138 out.go:252]   - Booting up control plane ...
	I1002 07:00:55.229765 1486138 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 07:00:55.229862 1486138 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 07:00:55.230347 1486138 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 07:00:55.249081 1486138 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 07:00:55.249193 1486138 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1002 07:00:55.258310 1486138 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1002 07:00:55.258595 1486138 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 07:00:55.258638 1486138 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1002 07:00:55.409908 1486138 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1002 07:00:55.410026 1486138 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1002 07:00:56.910786 1486138 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.501189538s
	I1002 07:00:56.917503 1486138 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1002 07:00:56.917605 1486138 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1002 07:00:56.917947 1486138 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1002 07:00:56.918032 1486138 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1002 07:01:00.796981 1486138 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 3.878962763s
	I1002 07:01:02.148716 1486138 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 5.231228845s
	I1002 07:01:03.419066 1486138 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 6.501311437s
	I1002 07:01:03.438401 1486138 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1002 07:01:03.464316 1486138 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1002 07:01:03.478659 1486138 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1002 07:01:03.478864 1486138 kubeadm.go:318] [mark-control-plane] Marking the node scheduled-stop-273808 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1002 07:01:03.490461 1486138 kubeadm.go:318] [bootstrap-token] Using token: 5a9p6v.4zs45lxm4jmuczna
	I1002 07:01:03.493359 1486138 out.go:252]   - Configuring RBAC rules ...
	I1002 07:01:03.493499 1486138 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1002 07:01:03.497673 1486138 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1002 07:01:03.509512 1486138 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1002 07:01:03.514185 1486138 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1002 07:01:03.520276 1486138 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1002 07:01:03.524932 1486138 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1002 07:01:03.828719 1486138 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1002 07:01:04.256393 1486138 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1002 07:01:04.826510 1486138 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1002 07:01:04.827539 1486138 kubeadm.go:318] 
	I1002 07:01:04.827607 1486138 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1002 07:01:04.827611 1486138 kubeadm.go:318] 
	I1002 07:01:04.827691 1486138 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1002 07:01:04.827695 1486138 kubeadm.go:318] 
	I1002 07:01:04.827721 1486138 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1002 07:01:04.827782 1486138 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1002 07:01:04.827834 1486138 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1002 07:01:04.827838 1486138 kubeadm.go:318] 
	I1002 07:01:04.827894 1486138 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1002 07:01:04.827897 1486138 kubeadm.go:318] 
	I1002 07:01:04.827946 1486138 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1002 07:01:04.827950 1486138 kubeadm.go:318] 
	I1002 07:01:04.828003 1486138 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1002 07:01:04.828127 1486138 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1002 07:01:04.828198 1486138 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1002 07:01:04.828202 1486138 kubeadm.go:318] 
	I1002 07:01:04.828289 1486138 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1002 07:01:04.828368 1486138 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1002 07:01:04.828372 1486138 kubeadm.go:318] 
	I1002 07:01:04.828459 1486138 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token 5a9p6v.4zs45lxm4jmuczna \
	I1002 07:01:04.828565 1486138 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:08445900f37e2db8182acd70d8e2a7711d9c9dbd967d5e887a6f3a131bca817f \
	I1002 07:01:04.828585 1486138 kubeadm.go:318] 	--control-plane 
	I1002 07:01:04.828589 1486138 kubeadm.go:318] 
	I1002 07:01:04.828676 1486138 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1002 07:01:04.828680 1486138 kubeadm.go:318] 
	I1002 07:01:04.828765 1486138 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token 5a9p6v.4zs45lxm4jmuczna \
	I1002 07:01:04.828871 1486138 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:08445900f37e2db8182acd70d8e2a7711d9c9dbd967d5e887a6f3a131bca817f 
	I1002 07:01:04.832787 1486138 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1002 07:01:04.833032 1486138 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1002 07:01:04.833144 1486138 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 07:01:04.833172 1486138 cni.go:84] Creating CNI manager for ""
	I1002 07:01:04.833184 1486138 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1002 07:01:04.836425 1486138 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1002 07:01:04.839280 1486138 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1002 07:01:04.846926 1486138 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1002 07:01:04.861914 1486138 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1002 07:01:04.862000 1486138 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 07:01:04.862035 1486138 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes scheduled-stop-273808 minikube.k8s.io/updated_at=2025_10_02T07_01_04_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=e67b65c0f4e92b22cf6bb9baed3c99d519c7afdb minikube.k8s.io/name=scheduled-stop-273808 minikube.k8s.io/primary=true
	I1002 07:01:05.014014 1486138 kubeadm.go:1113] duration metric: took 152.073025ms to wait for elevateKubeSystemPrivileges
	I1002 07:01:05.014049 1486138 ops.go:34] apiserver oom_adj: -16
	I1002 07:01:05.014057 1486138 kubeadm.go:402] duration metric: took 19.360620014s to StartCluster
	I1002 07:01:05.014072 1486138 settings.go:142] acquiring lock: {Name:mk549d445ee28d1b957693d6dbb26b038b2321bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 07:01:05.014139 1486138 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21643-1281649/kubeconfig
	I1002 07:01:05.014884 1486138 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-1281649/kubeconfig: {Name:mk9b20f6b5831bf91495a692140571471f3eef6b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 07:01:05.015157 1486138 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1002 07:01:05.015269 1486138 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1002 07:01:05.015537 1486138 config.go:182] Loaded profile config "scheduled-stop-273808": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
	I1002 07:01:05.015573 1486138 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1002 07:01:05.015636 1486138 addons.go:69] Setting storage-provisioner=true in profile "scheduled-stop-273808"
	I1002 07:01:05.015654 1486138 addons.go:238] Setting addon storage-provisioner=true in "scheduled-stop-273808"
	I1002 07:01:05.015675 1486138 host.go:66] Checking if "scheduled-stop-273808" exists ...
	I1002 07:01:05.016074 1486138 addons.go:69] Setting default-storageclass=true in profile "scheduled-stop-273808"
	I1002 07:01:05.016089 1486138 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "scheduled-stop-273808"
	I1002 07:01:05.016388 1486138 cli_runner.go:164] Run: docker container inspect scheduled-stop-273808 --format={{.State.Status}}
	I1002 07:01:05.016639 1486138 cli_runner.go:164] Run: docker container inspect scheduled-stop-273808 --format={{.State.Status}}
	I1002 07:01:05.019006 1486138 out.go:179] * Verifying Kubernetes components...
	I1002 07:01:05.022052 1486138 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 07:01:05.070033 1486138 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 07:01:05.072213 1486138 addons.go:238] Setting addon default-storageclass=true in "scheduled-stop-273808"
	I1002 07:01:05.072242 1486138 host.go:66] Checking if "scheduled-stop-273808" exists ...
	I1002 07:01:05.072672 1486138 cli_runner.go:164] Run: docker container inspect scheduled-stop-273808 --format={{.State.Status}}
	I1002 07:01:05.073095 1486138 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 07:01:05.073104 1486138 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1002 07:01:05.073151 1486138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-273808
	I1002 07:01:05.108741 1486138 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34154 SSHKeyPath:/home/jenkins/minikube-integration/21643-1281649/.minikube/machines/scheduled-stop-273808/id_rsa Username:docker}
	I1002 07:01:05.116194 1486138 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1002 07:01:05.116207 1486138 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1002 07:01:05.116272 1486138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-273808
	I1002 07:01:05.141398 1486138 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34154 SSHKeyPath:/home/jenkins/minikube-integration/21643-1281649/.minikube/machines/scheduled-stop-273808/id_rsa Username:docker}
	I1002 07:01:05.307808 1486138 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 07:01:05.308026 1486138 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1002 07:01:05.337651 1486138 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 07:01:05.416106 1486138 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1002 07:01:05.808267 1486138 start.go:976] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1002 07:01:05.809988 1486138 api_server.go:52] waiting for apiserver process to appear ...
	I1002 07:01:05.810036 1486138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:01:06.005305 1486138 api_server.go:72] duration metric: took 990.119406ms to wait for apiserver process to appear ...
	I1002 07:01:06.005319 1486138 api_server.go:88] waiting for apiserver healthz status ...
	I1002 07:01:06.005337 1486138 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1002 07:01:06.021187 1486138 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1002 07:01:06.022554 1486138 api_server.go:141] control plane version: v1.34.1
	I1002 07:01:06.022572 1486138 api_server.go:131] duration metric: took 17.248151ms to wait for apiserver health ...
	I1002 07:01:06.022580 1486138 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 07:01:06.027098 1486138 system_pods.go:59] 5 kube-system pods found
	I1002 07:01:06.027121 1486138 system_pods.go:61] "etcd-scheduled-stop-273808" [1c2a2789-fa1c-4bf0-ada8-41f2d004d52e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1002 07:01:06.027129 1486138 system_pods.go:61] "kube-apiserver-scheduled-stop-273808" [3fc5b5cb-05f0-4bf1-b437-069c7bcc10d1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1002 07:01:06.027136 1486138 system_pods.go:61] "kube-controller-manager-scheduled-stop-273808" [cbd59581-ddb6-4cd6-beb6-79c028093c37] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1002 07:01:06.027142 1486138 system_pods.go:61] "kube-scheduler-scheduled-stop-273808" [336f5874-5d3e-4158-8f2e-923382d3d81e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1002 07:01:06.027149 1486138 system_pods.go:61] "storage-provisioner" [1f35ade9-b08c-4065-9962-964aa478420d] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1002 07:01:06.027158 1486138 system_pods.go:74] duration metric: took 4.569448ms to wait for pod list to return data ...
	I1002 07:01:06.027169 1486138 kubeadm.go:586] duration metric: took 1.011989442s to wait for: map[apiserver:true system_pods:true]
	I1002 07:01:06.027182 1486138 node_conditions.go:102] verifying NodePressure condition ...
	I1002 07:01:06.027436 1486138 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1002 07:01:06.030794 1486138 addons.go:514] duration metric: took 1.015200411s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1002 07:01:06.031354 1486138 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1002 07:01:06.031376 1486138 node_conditions.go:123] node cpu capacity is 2
	I1002 07:01:06.031407 1486138 node_conditions.go:105] duration metric: took 4.203596ms to run NodePressure ...
	I1002 07:01:06.031419 1486138 start.go:241] waiting for startup goroutines ...
	I1002 07:01:06.314988 1486138 kapi.go:214] "coredns" deployment in "kube-system" namespace and "scheduled-stop-273808" context rescaled to 1 replicas
	I1002 07:01:06.315009 1486138 start.go:246] waiting for cluster config update ...
	I1002 07:01:06.315019 1486138 start.go:255] writing updated cluster config ...
	I1002 07:01:06.315307 1486138 ssh_runner.go:195] Run: rm -f paused
	I1002 07:01:06.381275 1486138 start.go:623] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1002 07:01:06.384385 1486138 out.go:179] * Done! kubectl is now configured to use "scheduled-stop-273808" cluster and "default" namespace by default
	
	
	==> Docker <==
	Oct 02 07:00:41 scheduled-stop-273808 dockerd[1130]: time="2025-10-02T07:00:41.682966338Z" level=info msg="Loading containers: done."
	Oct 02 07:00:41 scheduled-stop-273808 dockerd[1130]: time="2025-10-02T07:00:41.693883888Z" level=info msg="Docker daemon" commit=249d679 containerd-snapshotter=false storage-driver=overlay2 version=28.4.0
	Oct 02 07:00:41 scheduled-stop-273808 dockerd[1130]: time="2025-10-02T07:00:41.693960949Z" level=info msg="Initializing buildkit"
	Oct 02 07:00:41 scheduled-stop-273808 dockerd[1130]: time="2025-10-02T07:00:41.713273918Z" level=info msg="Completed buildkit initialization"
	Oct 02 07:00:41 scheduled-stop-273808 dockerd[1130]: time="2025-10-02T07:00:41.718695229Z" level=info msg="Daemon has completed initialization"
	Oct 02 07:00:41 scheduled-stop-273808 dockerd[1130]: time="2025-10-02T07:00:41.718791137Z" level=info msg="API listen on /run/docker.sock"
	Oct 02 07:00:41 scheduled-stop-273808 dockerd[1130]: time="2025-10-02T07:00:41.718885994Z" level=info msg="API listen on /var/run/docker.sock"
	Oct 02 07:00:41 scheduled-stop-273808 dockerd[1130]: time="2025-10-02T07:00:41.719002472Z" level=info msg="API listen on [::]:2376"
	Oct 02 07:00:41 scheduled-stop-273808 systemd[1]: Started docker.service - Docker Application Container Engine.
	Oct 02 07:00:42 scheduled-stop-273808 systemd[1]: Starting cri-docker.service - CRI Interface for Docker Application Container Engine...
	Oct 02 07:00:42 scheduled-stop-273808 cri-dockerd[1431]: time="2025-10-02T07:00:42Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Oct 02 07:00:42 scheduled-stop-273808 cri-dockerd[1431]: time="2025-10-02T07:00:42Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Oct 02 07:00:42 scheduled-stop-273808 cri-dockerd[1431]: time="2025-10-02T07:00:42Z" level=info msg="Start docker client with request timeout 0s"
	Oct 02 07:00:42 scheduled-stop-273808 cri-dockerd[1431]: time="2025-10-02T07:00:42Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Oct 02 07:00:42 scheduled-stop-273808 cri-dockerd[1431]: time="2025-10-02T07:00:42Z" level=info msg="Loaded network plugin cni"
	Oct 02 07:00:42 scheduled-stop-273808 cri-dockerd[1431]: time="2025-10-02T07:00:42Z" level=info msg="Docker cri networking managed by network plugin cni"
	Oct 02 07:00:42 scheduled-stop-273808 cri-dockerd[1431]: time="2025-10-02T07:00:42Z" level=info msg="Setting cgroupDriver cgroupfs"
	Oct 02 07:00:42 scheduled-stop-273808 cri-dockerd[1431]: time="2025-10-02T07:00:42Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Oct 02 07:00:42 scheduled-stop-273808 cri-dockerd[1431]: time="2025-10-02T07:00:42Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Oct 02 07:00:42 scheduled-stop-273808 cri-dockerd[1431]: time="2025-10-02T07:00:42Z" level=info msg="Start cri-dockerd grpc backend"
	Oct 02 07:00:42 scheduled-stop-273808 systemd[1]: Started cri-docker.service - CRI Interface for Docker Application Container Engine.
	Oct 02 07:00:57 scheduled-stop-273808 cri-dockerd[1431]: time="2025-10-02T07:00:57Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/b9c970430a8cd508cf9d1c623b60f95d5123c49f79941046ec1965501ac5eb89/resolv.conf as [nameserver 192.168.76.1 search us-east-2.compute.internal options edns0 trust-ad ndots:0]"
	Oct 02 07:00:57 scheduled-stop-273808 cri-dockerd[1431]: time="2025-10-02T07:00:57Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/73c8c95438f48a3b3a90dca8ee25ff47ebaa9df7fae1e6aad925e9a97d1aebf6/resolv.conf as [nameserver 192.168.76.1 search us-east-2.compute.internal options trust-ad ndots:0 edns0]"
	Oct 02 07:00:57 scheduled-stop-273808 cri-dockerd[1431]: time="2025-10-02T07:00:57Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/c96edf8923403862cb48d037c6e44c2045eafc8777fa10d285555dfd227792f8/resolv.conf as [nameserver 192.168.76.1 search us-east-2.compute.internal options ndots:0 edns0 trust-ad]"
	Oct 02 07:00:57 scheduled-stop-273808 cri-dockerd[1431]: time="2025-10-02T07:00:57Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/18b9f417dac0d1afa00eaaab11ae503b02d6275c5260f061044e9b4d578a487b/resolv.conf as [nameserver 192.168.76.1 search us-east-2.compute.internal options edns0 trust-ad ndots:0]"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                             NAMESPACE
	139929cebf074       b5f57ec6b9867       10 seconds ago      Running             kube-scheduler            0                   18b9f417dac0d       kube-scheduler-scheduled-stop-273808            kube-system
	2023557716beb       a1894772a478e       10 seconds ago      Running             etcd                      0                   c96edf8923403       etcd-scheduled-stop-273808                      kube-system
	45e4555fe66b7       43911e833d64d       10 seconds ago      Running             kube-apiserver            0                   73c8c95438f48       kube-apiserver-scheduled-stop-273808            kube-system
	d0fb3d5ca8854       7eb2c6ff0c5a7       10 seconds ago      Running             kube-controller-manager   0                   b9c970430a8cd       kube-controller-manager-scheduled-stop-273808   kube-system
	
	
	==> describe nodes <==
	Name:               scheduled-stop-273808
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=scheduled-stop-273808
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e67b65c0f4e92b22cf6bb9baed3c99d519c7afdb
	                    minikube.k8s.io/name=scheduled-stop-273808
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_02T07_01_04_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 02 Oct 2025 07:01:01 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  scheduled-stop-273808
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 02 Oct 2025 07:01:04 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 02 Oct 2025 07:01:07 +0000   Thu, 02 Oct 2025 07:00:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 02 Oct 2025 07:01:07 +0000   Thu, 02 Oct 2025 07:00:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 02 Oct 2025 07:01:07 +0000   Thu, 02 Oct 2025 07:00:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 02 Oct 2025 07:01:07 +0000   Thu, 02 Oct 2025 07:01:07 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    scheduled-stop-273808
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 56c28623810142b3b2babf1c6bdeee00
	  System UUID:                99de0c25-7938-4411-88a3-aa41d41e6d49
	  Boot ID:                    07f149ce-ad12-470a-acc5-7e688ae5314a
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://28.4.0
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	Non-terminated Pods:          (4 in total)
	  Namespace                   Name                                             CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                             ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-scheduled-stop-273808                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         4s
	  kube-system                 kube-apiserver-scheduled-stop-273808             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4s
	  kube-system                 kube-controller-manager-scheduled-stop-273808    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4s
	  kube-system                 kube-scheduler-scheduled-stop-273808             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                650m (32%)  0 (0%)
	  memory             100Mi (1%)  0 (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age   From     Message
	  ----     ------                   ----  ----     -------
	  Normal   Starting                 4s    kubelet  Starting kubelet.
	  Warning  CgroupV1                 4s    kubelet  cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeAllocatableEnforced  4s    kubelet  Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  4s    kubelet  Node scheduled-stop-273808 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    4s    kubelet  Node scheduled-stop-273808 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     4s    kubelet  Node scheduled-stop-273808 status is now: NodeHasSufficientPID
	  Normal   NodeReady                1s    kubelet  Node scheduled-stop-273808 status is now: NodeReady
	
	
	==> dmesg <==
	[Oct 2 06:06] systemd-journald[223]: Failed to send stream file descriptor to service manager: Connection refused
	[Oct 2 06:20] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> etcd [2023557716be] <==
	{"level":"warn","ts":"2025-10-02T07:00:59.413615Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39760","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:00:59.429319Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39788","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:00:59.453257Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39796","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:00:59.465419Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39810","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:00:59.482789Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39830","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:00:59.499148Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39840","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:00:59.536428Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39854","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:00:59.545421Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39866","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:00:59.561990Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39882","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:00:59.585445Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39896","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:00:59.625521Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39922","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:00:59.651004Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39932","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:00:59.663960Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39950","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:00:59.686421Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39966","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:00:59.697241Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39974","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:00:59.716789Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39990","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:00:59.734127Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40002","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:00:59.770128Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40016","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:00:59.802826Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40036","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:00:59.830309Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40056","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:00:59.881473Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40086","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:00:59.899814Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40116","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:00:59.925916Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40132","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:00:59.972114Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40148","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:01:00.165781Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40172","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 07:01:08 up  6:43,  0 user,  load average: 2.67, 2.73, 3.15
	Linux scheduled-stop-273808 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kube-apiserver [45e4555fe66b] <==
	I1002 07:01:01.372651       1 cache.go:39] Caches are synced for autoregister controller
	I1002 07:01:01.373000       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1002 07:01:01.380844       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1002 07:01:01.384927       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1002 07:01:01.385061       1 policy_source.go:240] refreshing policies
	E1002 07:01:01.422850       1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
	I1002 07:01:01.438875       1 controller.go:667] quota admission added evaluator for: namespaces
	I1002 07:01:01.484338       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1002 07:01:01.486876       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1002 07:01:01.487386       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1002 07:01:01.505760       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1002 07:01:01.527831       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1002 07:01:02.159729       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1002 07:01:02.166706       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1002 07:01:02.166733       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1002 07:01:02.837170       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1002 07:01:02.894785       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1002 07:01:02.969051       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1002 07:01:02.977098       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1002 07:01:02.978449       1 controller.go:667] quota admission added evaluator for: endpoints
	I1002 07:01:02.983709       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1002 07:01:03.258052       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1002 07:01:04.236210       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1002 07:01:04.253935       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1002 07:01:04.265659       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	
	
	==> kube-controller-manager [d0fb3d5ca885] <==
	I1002 07:01:07.319762       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="poddisruptionbudgets.policy"
	I1002 07:01:07.319782       1 controllermanager.go:781] "Started controller" controller="resourcequota-controller"
	I1002 07:01:07.319850       1 resource_quota_controller.go:300] "Starting resource quota controller" logger="resourcequota-controller"
	I1002 07:01:07.319861       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1002 07:01:07.319880       1 resource_quota_monitor.go:308] "QuotaMonitor running" logger="resourcequota-controller"
	I1002 07:01:07.359567       1 controllermanager.go:781] "Started controller" controller="certificatesigningrequest-approving-controller"
	I1002 07:01:07.359596       1 controllermanager.go:739] "Skipping a cloud provider controller" controller="cloud-node-lifecycle-controller"
	I1002 07:01:07.364319       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-approving-controller" name="csrapproving"
	I1002 07:01:07.364347       1 shared_informer.go:349] "Waiting for caches to sync" controller="certificate-csrapproving"
	I1002 07:01:07.554500       1 controllermanager.go:781] "Started controller" controller="validatingadmissionpolicy-status-controller"
	I1002 07:01:07.554536       1 controllermanager.go:739] "Skipping a cloud provider controller" controller="service-lb-controller"
	I1002 07:01:07.554588       1 shared_informer.go:349] "Waiting for caches to sync" controller="validatingadmissionpolicy-status"
	I1002 07:01:07.705970       1 controllermanager.go:781] "Started controller" controller="replicationcontroller-controller"
	I1002 07:01:07.706123       1 replica_set.go:243] "Starting controller" logger="replicationcontroller-controller" name="replicationcontroller"
	I1002 07:01:07.706133       1 shared_informer.go:349] "Waiting for caches to sync" controller="ReplicationController"
	I1002 07:01:07.856433       1 controllermanager.go:781] "Started controller" controller="pod-garbage-collector-controller"
	I1002 07:01:07.856556       1 gc_controller.go:99] "Starting GC controller" logger="pod-garbage-collector-controller"
	I1002 07:01:07.856572       1 shared_informer.go:349] "Waiting for caches to sync" controller="GC"
	I1002 07:01:08.010505       1 controllermanager.go:781] "Started controller" controller="replicaset-controller"
	I1002 07:01:08.010579       1 replica_set.go:243] "Starting controller" logger="replicaset-controller" name="replicaset"
	I1002 07:01:08.010769       1 shared_informer.go:349] "Waiting for caches to sync" controller="ReplicaSet"
	I1002 07:01:08.156020       1 controllermanager.go:781] "Started controller" controller="clusterrole-aggregation-controller"
	I1002 07:01:08.160360       1 clusterroleaggregation_controller.go:194] "Starting ClusterRoleAggregator controller" logger="clusterrole-aggregation-controller"
	I1002 07:01:08.160384       1 shared_informer.go:349] "Waiting for caches to sync" controller="ClusterRoleAggregator"
	I1002 07:01:08.172969       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	
	
	==> kube-scheduler [139929cebf07] <==
	I1002 07:01:02.136044       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 07:01:02.138391       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 07:01:02.138431       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 07:01:02.139278       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1002 07:01:02.139543       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1002 07:01:02.145643       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1002 07:01:02.147822       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1002 07:01:02.148936       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1002 07:01:02.149007       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1002 07:01:02.152132       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1002 07:01:02.152289       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1002 07:01:02.152430       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1002 07:01:02.152626       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1002 07:01:02.152768       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1002 07:01:02.152881       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1002 07:01:02.156046       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1002 07:01:02.156140       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1002 07:01:02.156377       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1002 07:01:02.156450       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1002 07:01:02.156537       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1002 07:01:02.158536       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1002 07:01:02.158744       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1002 07:01:02.158946       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1002 07:01:02.159070       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	I1002 07:01:03.139273       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 02 07:01:04 scheduled-stop-273808 kubelet[2299]: I1002 07:01:04.589490    2299 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9a4224e9bf45d23c8d42b6ed054a5be9-kubeconfig\") pod \"kube-scheduler-scheduled-stop-273808\" (UID: \"9a4224e9bf45d23c8d42b6ed054a5be9\") " pod="kube-system/kube-scheduler-scheduled-stop-273808"
	Oct 02 07:01:04 scheduled-stop-273808 kubelet[2299]: I1002 07:01:04.589523    2299 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/00eaa4cd57314d0f43b27da561d2306e-k8s-certs\") pod \"kube-controller-manager-scheduled-stop-273808\" (UID: \"00eaa4cd57314d0f43b27da561d2306e\") " pod="kube-system/kube-controller-manager-scheduled-stop-273808"
	Oct 02 07:01:04 scheduled-stop-273808 kubelet[2299]: I1002 07:01:04.589544    2299 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/00eaa4cd57314d0f43b27da561d2306e-usr-local-share-ca-certificates\") pod \"kube-controller-manager-scheduled-stop-273808\" (UID: \"00eaa4cd57314d0f43b27da561d2306e\") " pod="kube-system/kube-controller-manager-scheduled-stop-273808"
	Oct 02 07:01:04 scheduled-stop-273808 kubelet[2299]: I1002 07:01:04.589564    2299 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/8c6bc071c734c037404cae344cf27e44-etcd-certs\") pod \"etcd-scheduled-stop-273808\" (UID: \"8c6bc071c734c037404cae344cf27e44\") " pod="kube-system/etcd-scheduled-stop-273808"
	Oct 02 07:01:04 scheduled-stop-273808 kubelet[2299]: I1002 07:01:04.589582    2299 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/8c6bc071c734c037404cae344cf27e44-etcd-data\") pod \"etcd-scheduled-stop-273808\" (UID: \"8c6bc071c734c037404cae344cf27e44\") " pod="kube-system/etcd-scheduled-stop-273808"
	Oct 02 07:01:04 scheduled-stop-273808 kubelet[2299]: I1002 07:01:04.589598    2299 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8bd6dcd5dbca2297dd68dc00514bb6f0-usr-share-ca-certificates\") pod \"kube-apiserver-scheduled-stop-273808\" (UID: \"8bd6dcd5dbca2297dd68dc00514bb6f0\") " pod="kube-system/kube-apiserver-scheduled-stop-273808"
	Oct 02 07:01:04 scheduled-stop-273808 kubelet[2299]: I1002 07:01:04.589613    2299 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/00eaa4cd57314d0f43b27da561d2306e-ca-certs\") pod \"kube-controller-manager-scheduled-stop-273808\" (UID: \"00eaa4cd57314d0f43b27da561d2306e\") " pod="kube-system/kube-controller-manager-scheduled-stop-273808"
	Oct 02 07:01:04 scheduled-stop-273808 kubelet[2299]: I1002 07:01:04.589630    2299 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/00eaa4cd57314d0f43b27da561d2306e-etc-ca-certificates\") pod \"kube-controller-manager-scheduled-stop-273808\" (UID: \"00eaa4cd57314d0f43b27da561d2306e\") " pod="kube-system/kube-controller-manager-scheduled-stop-273808"
	Oct 02 07:01:04 scheduled-stop-273808 kubelet[2299]: I1002 07:01:04.589659    2299 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8bd6dcd5dbca2297dd68dc00514bb6f0-etc-ca-certificates\") pod \"kube-apiserver-scheduled-stop-273808\" (UID: \"8bd6dcd5dbca2297dd68dc00514bb6f0\") " pod="kube-system/kube-apiserver-scheduled-stop-273808"
	Oct 02 07:01:04 scheduled-stop-273808 kubelet[2299]: I1002 07:01:04.589684    2299 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8bd6dcd5dbca2297dd68dc00514bb6f0-usr-local-share-ca-certificates\") pod \"kube-apiserver-scheduled-stop-273808\" (UID: \"8bd6dcd5dbca2297dd68dc00514bb6f0\") " pod="kube-system/kube-apiserver-scheduled-stop-273808"
	Oct 02 07:01:04 scheduled-stop-273808 kubelet[2299]: I1002 07:01:04.589701    2299 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/00eaa4cd57314d0f43b27da561d2306e-flexvolume-dir\") pod \"kube-controller-manager-scheduled-stop-273808\" (UID: \"00eaa4cd57314d0f43b27da561d2306e\") " pod="kube-system/kube-controller-manager-scheduled-stop-273808"
	Oct 02 07:01:04 scheduled-stop-273808 kubelet[2299]: I1002 07:01:04.589717    2299 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/00eaa4cd57314d0f43b27da561d2306e-kubeconfig\") pod \"kube-controller-manager-scheduled-stop-273808\" (UID: \"00eaa4cd57314d0f43b27da561d2306e\") " pod="kube-system/kube-controller-manager-scheduled-stop-273808"
	Oct 02 07:01:04 scheduled-stop-273808 kubelet[2299]: I1002 07:01:04.589746    2299 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8bd6dcd5dbca2297dd68dc00514bb6f0-ca-certs\") pod \"kube-apiserver-scheduled-stop-273808\" (UID: \"8bd6dcd5dbca2297dd68dc00514bb6f0\") " pod="kube-system/kube-apiserver-scheduled-stop-273808"
	Oct 02 07:01:04 scheduled-stop-273808 kubelet[2299]: I1002 07:01:04.589761    2299 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8bd6dcd5dbca2297dd68dc00514bb6f0-k8s-certs\") pod \"kube-apiserver-scheduled-stop-273808\" (UID: \"8bd6dcd5dbca2297dd68dc00514bb6f0\") " pod="kube-system/kube-apiserver-scheduled-stop-273808"
	Oct 02 07:01:05 scheduled-stop-273808 kubelet[2299]: I1002 07:01:05.138774    2299 apiserver.go:52] "Watching apiserver"
	Oct 02 07:01:05 scheduled-stop-273808 kubelet[2299]: I1002 07:01:05.184510    2299 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 02 07:01:05 scheduled-stop-273808 kubelet[2299]: I1002 07:01:05.311663    2299 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-scheduled-stop-273808"
	Oct 02 07:01:05 scheduled-stop-273808 kubelet[2299]: I1002 07:01:05.311909    2299 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-scheduled-stop-273808"
	Oct 02 07:01:05 scheduled-stop-273808 kubelet[2299]: E1002 07:01:05.344957    2299 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-scheduled-stop-273808\" already exists" pod="kube-system/kube-apiserver-scheduled-stop-273808"
	Oct 02 07:01:05 scheduled-stop-273808 kubelet[2299]: E1002 07:01:05.345607    2299 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-scheduled-stop-273808\" already exists" pod="kube-system/etcd-scheduled-stop-273808"
	Oct 02 07:01:05 scheduled-stop-273808 kubelet[2299]: I1002 07:01:05.375501    2299 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-scheduled-stop-273808" podStartSLOduration=3.375482902 podStartE2EDuration="3.375482902s" podCreationTimestamp="2025-10-02 07:01:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 07:01:05.355013058 +0000 UTC m=+1.294978952" watchObservedRunningTime="2025-10-02 07:01:05.375482902 +0000 UTC m=+1.315448812"
	Oct 02 07:01:05 scheduled-stop-273808 kubelet[2299]: I1002 07:01:05.397158    2299 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-scheduled-stop-273808" podStartSLOduration=1.397129467 podStartE2EDuration="1.397129467s" podCreationTimestamp="2025-10-02 07:01:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 07:01:05.376735339 +0000 UTC m=+1.316701241" watchObservedRunningTime="2025-10-02 07:01:05.397129467 +0000 UTC m=+1.337095369"
	Oct 02 07:01:05 scheduled-stop-273808 kubelet[2299]: I1002 07:01:05.418140    2299 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-scheduled-stop-273808" podStartSLOduration=1.418120826 podStartE2EDuration="1.418120826s" podCreationTimestamp="2025-10-02 07:01:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 07:01:05.397649448 +0000 UTC m=+1.337615342" watchObservedRunningTime="2025-10-02 07:01:05.418120826 +0000 UTC m=+1.358086720"
	Oct 02 07:01:05 scheduled-stop-273808 kubelet[2299]: I1002 07:01:05.418238    2299 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-scheduled-stop-273808" podStartSLOduration=1.418231905 podStartE2EDuration="1.418231905s" podCreationTimestamp="2025-10-02 07:01:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 07:01:05.414630273 +0000 UTC m=+1.354596183" watchObservedRunningTime="2025-10-02 07:01:05.418231905 +0000 UTC m=+1.358197815"
	Oct 02 07:01:07 scheduled-stop-273808 kubelet[2299]: I1002 07:01:07.482819    2299 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p scheduled-stop-273808 -n scheduled-stop-273808
helpers_test.go:269: (dbg) Run:  kubectl --context scheduled-stop-273808 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: kube-proxy-hww7j storage-provisioner
helpers_test.go:282: ======> post-mortem[TestScheduledStopUnix]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context scheduled-stop-273808 describe pod kube-proxy-hww7j storage-provisioner
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context scheduled-stop-273808 describe pod kube-proxy-hww7j storage-provisioner: exit status 1 (99.92557ms)

** stderr ** 
	Error from server (NotFound): pods "kube-proxy-hww7j" not found
	Error from server (NotFound): pods "storage-provisioner" not found

** /stderr **
helpers_test.go:287: kubectl --context scheduled-stop-273808 describe pod kube-proxy-hww7j storage-provisioner: exit status 1
helpers_test.go:175: Cleaning up "scheduled-stop-273808" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-273808
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-273808: (2.192486165s)
--- FAIL: TestScheduledStopUnix (42.09s)


Test pass (320/347)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 7.28
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.1
9 TestDownloadOnly/v1.28.0/DeleteAll 0.22
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.34.1/json-events 3.7
13 TestDownloadOnly/v1.34.1/preload-exists 0
17 TestDownloadOnly/v1.34.1/LogsDuration 0.41
18 TestDownloadOnly/v1.34.1/DeleteAll 0.22
19 TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds 0.15
21 TestBinaryMirror 0.62
22 TestOffline 92.28
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.08
27 TestAddons/Setup 184.64
29 TestAddons/serial/Volcano 42.69
31 TestAddons/serial/GCPAuth/Namespaces 0.21
32 TestAddons/serial/GCPAuth/FakeCredentials 9.97
35 TestAddons/parallel/Registry 15.09
36 TestAddons/parallel/RegistryCreds 0.78
37 TestAddons/parallel/Ingress 19.42
38 TestAddons/parallel/InspektorGadget 5.23
39 TestAddons/parallel/MetricsServer 6
41 TestAddons/parallel/CSI 49.82
42 TestAddons/parallel/Headlamp 17.26
43 TestAddons/parallel/CloudSpanner 5.78
44 TestAddons/parallel/LocalPath 51.5
45 TestAddons/parallel/NvidiaDevicePlugin 5.61
46 TestAddons/parallel/Yakd 11.76
48 TestAddons/StoppedEnableDisable 11.29
49 TestCertOptions 37.32
50 TestCertExpiration 271.33
51 TestDockerFlags 45.88
52 TestForceSystemdFlag 49.33
53 TestForceSystemdEnv 48.28
59 TestErrorSpam/setup 35.16
60 TestErrorSpam/start 0.81
61 TestErrorSpam/status 1.08
62 TestErrorSpam/pause 1.56
63 TestErrorSpam/unpause 1.75
64 TestErrorSpam/stop 11.17
67 TestFunctional/serial/CopySyncFile 0
68 TestFunctional/serial/StartWithProxy 80.1
69 TestFunctional/serial/AuditLog 0
70 TestFunctional/serial/SoftStart 51.84
71 TestFunctional/serial/KubeContext 0.06
72 TestFunctional/serial/KubectlGetPods 0.1
75 TestFunctional/serial/CacheCmd/cache/add_remote 3.01
76 TestFunctional/serial/CacheCmd/cache/add_local 1.11
77 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.16
78 TestFunctional/serial/CacheCmd/cache/list 0.1
79 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.53
80 TestFunctional/serial/CacheCmd/cache/cache_reload 1.63
81 TestFunctional/serial/CacheCmd/cache/delete 0.12
82 TestFunctional/serial/MinikubeKubectlCmd 0.15
83 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.14
84 TestFunctional/serial/ExtraConfig 55.87
85 TestFunctional/serial/ComponentHealth 0.09
86 TestFunctional/serial/LogsCmd 1.36
87 TestFunctional/serial/LogsFileCmd 1.34
88 TestFunctional/serial/InvalidService 4.82
90 TestFunctional/parallel/ConfigCmd 0.5
91 TestFunctional/parallel/DashboardCmd 12.22
92 TestFunctional/parallel/DryRun 0.44
93 TestFunctional/parallel/InternationalLanguage 0.22
94 TestFunctional/parallel/StatusCmd 1.4
98 TestFunctional/parallel/ServiceCmdConnect 8.71
99 TestFunctional/parallel/AddonsCmd 0.15
100 TestFunctional/parallel/PersistentVolumeClaim 26.8
102 TestFunctional/parallel/SSHCmd 0.61
103 TestFunctional/parallel/CpCmd 2.16
105 TestFunctional/parallel/FileSync 0.4
106 TestFunctional/parallel/CertSync 2.26
110 TestFunctional/parallel/NodeLabels 0.11
112 TestFunctional/parallel/NonActiveRuntimeDisabled 0.39
114 TestFunctional/parallel/License 0.35
115 TestFunctional/parallel/Version/short 0.08
116 TestFunctional/parallel/Version/components 1.34
117 TestFunctional/parallel/ImageCommands/ImageListShort 0.3
118 TestFunctional/parallel/ImageCommands/ImageListTable 0.29
119 TestFunctional/parallel/ImageCommands/ImageListJson 0.27
120 TestFunctional/parallel/ImageCommands/ImageListYaml 0.42
121 TestFunctional/parallel/ImageCommands/ImageBuild 4.43
122 TestFunctional/parallel/ImageCommands/Setup 0.65
123 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.4
124 TestFunctional/parallel/DockerEnv/bash 1.35
125 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.02
126 TestFunctional/parallel/UpdateContextCmd/no_changes 0.24
127 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.21
128 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.2
129 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.26
130 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.39
131 TestFunctional/parallel/ImageCommands/ImageRemove 0.57
132 TestFunctional/parallel/ServiceCmd/DeployApp 8.28
133 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.82
134 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.43
136 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.52
137 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
139 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 8.35
140 TestFunctional/parallel/ServiceCmd/List 0.35
141 TestFunctional/parallel/ServiceCmd/JSONOutput 0.35
142 TestFunctional/parallel/ServiceCmd/HTTPS 0.38
143 TestFunctional/parallel/ServiceCmd/Format 0.36
144 TestFunctional/parallel/ServiceCmd/URL 0.38
145 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.08
146 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
150 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
151 TestFunctional/parallel/ProfileCmd/profile_not_create 0.56
152 TestFunctional/parallel/ProfileCmd/profile_list 0.51
153 TestFunctional/parallel/ProfileCmd/profile_json_output 0.43
154 TestFunctional/parallel/MountCmd/any-port 8.27
155 TestFunctional/parallel/MountCmd/specific-port 2.26
156 TestFunctional/parallel/MountCmd/VerifyCleanup 1.43
157 TestFunctional/delete_echo-server_images 0.06
158 TestFunctional/delete_my-image_image 0.02
159 TestFunctional/delete_minikube_cached_images 0.02
164 TestMultiControlPlane/serial/StartCluster 174.76
165 TestMultiControlPlane/serial/DeployApp 7.81
166 TestMultiControlPlane/serial/PingHostFromPods 1.79
167 TestMultiControlPlane/serial/AddWorkerNode 39.45
168 TestMultiControlPlane/serial/NodeLabels 0.1
169 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.14
170 TestMultiControlPlane/serial/CopyFile 20.4
171 TestMultiControlPlane/serial/StopSecondaryNode 11.92
172 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.85
173 TestMultiControlPlane/serial/RestartSecondaryNode 46.75
174 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.17
175 TestMultiControlPlane/serial/RestartClusterKeepsNodes 222.85
176 TestMultiControlPlane/serial/DeleteSecondaryNode 11.62
177 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.8
178 TestMultiControlPlane/serial/StopCluster 33.24
179 TestMultiControlPlane/serial/RestartCluster 116.48
180 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.82
181 TestMultiControlPlane/serial/AddSecondaryNode 96.37
182 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.06
185 TestImageBuild/serial/Setup 36.23
186 TestImageBuild/serial/NormalBuild 1.81
187 TestImageBuild/serial/BuildWithBuildArg 1.24
188 TestImageBuild/serial/BuildWithDockerIgnore 0.9
189 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.96
193 TestJSONOutput/start/Command 74.45
194 TestJSONOutput/start/Audit 0
196 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
197 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
199 TestJSONOutput/pause/Command 0.64
200 TestJSONOutput/pause/Audit 0
202 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
203 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
205 TestJSONOutput/unpause/Command 0.58
206 TestJSONOutput/unpause/Audit 0
208 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
209 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
211 TestJSONOutput/stop/Command 5.88
212 TestJSONOutput/stop/Audit 0
214 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
215 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
216 TestErrorJSONOutput 0.24
218 TestKicCustomNetwork/create_custom_network 41.92
219 TestKicCustomNetwork/use_default_bridge_network 37.61
220 TestKicExistingNetwork 36.34
221 TestKicCustomSubnet 39.95
222 TestKicStaticIP 39.83
223 TestMainNoArgs 0.06
224 TestMinikubeProfile 72.8
227 TestMountStart/serial/StartWithMountFirst 8.73
228 TestMountStart/serial/VerifyMountFirst 0.27
229 TestMountStart/serial/StartWithMountSecond 11.12
230 TestMountStart/serial/VerifyMountSecond 0.28
231 TestMountStart/serial/DeleteFirst 1.5
232 TestMountStart/serial/VerifyMountPostDelete 0.28
233 TestMountStart/serial/Stop 1.21
234 TestMountStart/serial/RestartStopped 8.39
235 TestMountStart/serial/VerifyMountPostStop 0.27
238 TestMultiNode/serial/FreshStart2Nodes 92.91
239 TestMultiNode/serial/DeployApp2Nodes 5.33
240 TestMultiNode/serial/PingHostFrom2Pods 1.04
241 TestMultiNode/serial/AddNode 35
242 TestMultiNode/serial/MultiNodeLabels 0.12
243 TestMultiNode/serial/ProfileList 0.95
244 TestMultiNode/serial/CopyFile 10.31
245 TestMultiNode/serial/StopNode 2.32
246 TestMultiNode/serial/StartAfterStop 11.05
247 TestMultiNode/serial/RestartKeepsNodes 79.71
248 TestMultiNode/serial/DeleteNode 5.79
249 TestMultiNode/serial/StopMultiNode 21.93
250 TestMultiNode/serial/RestartMultiNode 53.16
251 TestMultiNode/serial/ValidateNameConflict 34.75
256 TestPreload 120.58
259 TestSkaffold 144.85
261 TestInsufficientStorage 14.03
262 TestRunningBinaryUpgrade 93.27
264 TestKubernetesUpgrade 384.84
265 TestMissingContainerUpgrade 105.49
267 TestNoKubernetes/serial/StartNoK8sWithVersion 0.11
268 TestNoKubernetes/serial/StartWithK8s 43.02
269 TestNoKubernetes/serial/StartWithStopK8s 18.32
270 TestNoKubernetes/serial/Start 10.99
271 TestNoKubernetes/serial/VerifyK8sNotRunning 0.29
272 TestNoKubernetes/serial/ProfileList 1.1
273 TestNoKubernetes/serial/Stop 1.24
274 TestNoKubernetes/serial/StartNoArgs 7.75
275 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.3
287 TestStoppedBinaryUpgrade/Setup 0.68
288 TestStoppedBinaryUpgrade/Upgrade 92.46
289 TestStoppedBinaryUpgrade/MinikubeLogs 1.13
298 TestPause/serial/Start 78.98
299 TestPause/serial/SecondStartNoReconfiguration 51.58
300 TestPause/serial/Pause 0.65
301 TestPause/serial/VerifyStatus 0.33
302 TestPause/serial/Unpause 0.75
303 TestPause/serial/PauseAgain 1
304 TestPause/serial/DeletePaused 2.21
305 TestPause/serial/VerifyDeletedResources 16.03
306 TestNetworkPlugins/group/auto/Start 55.77
307 TestNetworkPlugins/group/auto/KubeletFlags 0.4
308 TestNetworkPlugins/group/auto/NetCatPod 10.29
309 TestNetworkPlugins/group/auto/DNS 0.36
310 TestNetworkPlugins/group/auto/Localhost 0.22
311 TestNetworkPlugins/group/auto/HairPin 0.22
312 TestNetworkPlugins/group/kindnet/Start 68.21
313 TestNetworkPlugins/group/calico/Start 58.91
314 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
315 TestNetworkPlugins/group/kindnet/KubeletFlags 0.34
316 TestNetworkPlugins/group/kindnet/NetCatPod 11.35
317 TestNetworkPlugins/group/kindnet/DNS 0.22
318 TestNetworkPlugins/group/kindnet/Localhost 0.47
319 TestNetworkPlugins/group/kindnet/HairPin 0.26
320 TestNetworkPlugins/group/calico/ControllerPod 6.01
321 TestNetworkPlugins/group/calico/KubeletFlags 0.45
322 TestNetworkPlugins/group/calico/NetCatPod 11.36
323 TestNetworkPlugins/group/custom-flannel/Start 59.29
324 TestNetworkPlugins/group/calico/DNS 0.2
325 TestNetworkPlugins/group/calico/Localhost 0.22
326 TestNetworkPlugins/group/calico/HairPin 0.18
327 TestNetworkPlugins/group/false/Start 77.66
328 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.32
329 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.38
330 TestNetworkPlugins/group/custom-flannel/DNS 0.18
331 TestNetworkPlugins/group/custom-flannel/Localhost 0.16
332 TestNetworkPlugins/group/custom-flannel/HairPin 0.22
333 TestNetworkPlugins/group/enable-default-cni/Start 81.82
334 TestNetworkPlugins/group/false/KubeletFlags 0.41
335 TestNetworkPlugins/group/false/NetCatPod 10.37
336 TestNetworkPlugins/group/false/DNS 0.26
337 TestNetworkPlugins/group/false/Localhost 0.2
338 TestNetworkPlugins/group/false/HairPin 0.26
339 TestNetworkPlugins/group/flannel/Start 52.68
340 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.37
341 TestNetworkPlugins/group/enable-default-cni/NetCatPod 12.37
342 TestNetworkPlugins/group/enable-default-cni/DNS 0.21
343 TestNetworkPlugins/group/enable-default-cni/Localhost 0.21
344 TestNetworkPlugins/group/enable-default-cni/HairPin 0.25
345 TestNetworkPlugins/group/flannel/ControllerPod 6.01
346 TestNetworkPlugins/group/flannel/KubeletFlags 0.51
347 TestNetworkPlugins/group/flannel/NetCatPod 12.38
348 TestNetworkPlugins/group/bridge/Start 80.76
349 TestNetworkPlugins/group/flannel/DNS 0.47
350 TestNetworkPlugins/group/flannel/Localhost 0.42
351 TestNetworkPlugins/group/flannel/HairPin 0.37
352 TestNetworkPlugins/group/kubenet/Start 83.41
353 TestNetworkPlugins/group/bridge/KubeletFlags 0.32
354 TestNetworkPlugins/group/bridge/NetCatPod 9.31
355 TestNetworkPlugins/group/bridge/DNS 0.21
356 TestNetworkPlugins/group/bridge/Localhost 0.16
357 TestNetworkPlugins/group/bridge/HairPin 0.18
359 TestStartStop/group/old-k8s-version/serial/FirstStart 97.77
360 TestNetworkPlugins/group/kubenet/KubeletFlags 0.34
361 TestNetworkPlugins/group/kubenet/NetCatPod 10.46
362 TestNetworkPlugins/group/kubenet/DNS 0.22
363 TestNetworkPlugins/group/kubenet/Localhost 0.21
364 TestNetworkPlugins/group/kubenet/HairPin 0.3
366 TestStartStop/group/no-preload/serial/FirstStart 86.84
367 TestStartStop/group/old-k8s-version/serial/DeployApp 10.42
368 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.2
369 TestStartStop/group/old-k8s-version/serial/Stop 10.98
370 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.29
371 TestStartStop/group/old-k8s-version/serial/SecondStart 57.52
372 TestStartStop/group/no-preload/serial/DeployApp 9.46
373 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.38
374 TestStartStop/group/no-preload/serial/Stop 11.29
375 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.29
376 TestStartStop/group/no-preload/serial/SecondStart 54.2
377 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6
378 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.11
379 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.29
380 TestStartStop/group/old-k8s-version/serial/Pause 3.23
382 TestStartStop/group/embed-certs/serial/FirstStart 81.57
383 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
384 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.12
385 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.26
386 TestStartStop/group/no-preload/serial/Pause 3.81
388 TestStartStop/group/newest-cni/serial/FirstStart 45.67
389 TestStartStop/group/newest-cni/serial/DeployApp 0
390 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.1
391 TestStartStop/group/newest-cni/serial/Stop 5.92
392 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.2
393 TestStartStop/group/newest-cni/serial/SecondStart 20.7
394 TestStartStop/group/embed-certs/serial/DeployApp 10.52
395 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.79
396 TestStartStop/group/embed-certs/serial/Stop 11.64
397 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
398 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
399 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.24
400 TestStartStop/group/newest-cni/serial/Pause 3.86
401 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.39
402 TestStartStop/group/embed-certs/serial/SecondStart 58.76
404 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 80.76
405 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
406 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 6.1
407 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.25
408 TestStartStop/group/embed-certs/serial/Pause 3.71
409 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.34
410 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.04
411 TestStartStop/group/default-k8s-diff-port/serial/Stop 10.83
412 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.2
413 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 52.97
414 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
415 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 6.1
416 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.24
417 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.92
TestDownloadOnly/v1.28.0/json-events (7.28s)

=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-850350 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-850350 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=docker  --container-runtime=docker: (7.281477747s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (7.28s)

TestDownloadOnly/v1.28.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1002 06:20:46.152820 1283508 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime docker
I1002 06:20:46.152909 1283508 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21643-1281649/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

TestDownloadOnly/v1.28.0/LogsDuration (0.1s)

=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-850350
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-850350: exit status 85 (97.195652ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                     ARGS                                                                                      │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-850350 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=docker  --container-runtime=docker │ download-only-850350 │ jenkins │ v1.37.0 │ 02 Oct 25 06:20 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 06:20:38
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 06:20:38.920515 1283513 out.go:360] Setting OutFile to fd 1 ...
	I1002 06:20:38.920656 1283513 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 06:20:38.920667 1283513 out.go:374] Setting ErrFile to fd 2...
	I1002 06:20:38.920672 1283513 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 06:20:38.920945 1283513 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-1281649/.minikube/bin
	W1002 06:20:38.921083 1283513 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21643-1281649/.minikube/config/config.json: open /home/jenkins/minikube-integration/21643-1281649/.minikube/config/config.json: no such file or directory
	I1002 06:20:38.921469 1283513 out.go:368] Setting JSON to true
	I1002 06:20:38.922312 1283513 start.go:130] hostinfo: {"hostname":"ip-172-31-29-130","uptime":21773,"bootTime":1759364266,"procs":165,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1002 06:20:38.922375 1283513 start.go:140] virtualization:  
	I1002 06:20:38.926431 1283513 out.go:99] [download-only-850350] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	W1002 06:20:38.926605 1283513 preload.go:349] Failed to list preload files: open /home/jenkins/minikube-integration/21643-1281649/.minikube/cache/preloaded-tarball: no such file or directory
	I1002 06:20:38.926715 1283513 notify.go:220] Checking for updates...
	I1002 06:20:38.930481 1283513 out.go:171] MINIKUBE_LOCATION=21643
	I1002 06:20:38.933334 1283513 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 06:20:38.936225 1283513 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21643-1281649/kubeconfig
	I1002 06:20:38.939053 1283513 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21643-1281649/.minikube
	I1002 06:20:38.942002 1283513 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1002 06:20:38.947677 1283513 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1002 06:20:38.947970 1283513 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 06:20:38.976157 1283513 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1002 06:20:38.976358 1283513 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 06:20:39.032553 1283513 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-10-02 06:20:39.023387528 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 06:20:39.032670 1283513 docker.go:318] overlay module found
	I1002 06:20:39.035715 1283513 out.go:99] Using the docker driver based on user configuration
	I1002 06:20:39.035758 1283513 start.go:304] selected driver: docker
	I1002 06:20:39.035766 1283513 start.go:924] validating driver "docker" against <nil>
	I1002 06:20:39.035868 1283513 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 06:20:39.088520 1283513 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-10-02 06:20:39.07914859 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 06:20:39.088671 1283513 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1002 06:20:39.088943 1283513 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1002 06:20:39.089102 1283513 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1002 06:20:39.092236 1283513 out.go:171] Using Docker driver with root privileges
	I1002 06:20:39.095158 1283513 cni.go:84] Creating CNI manager for ""
	I1002 06:20:39.095236 1283513 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1002 06:20:39.095251 1283513 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1002 06:20:39.095335 1283513 start.go:348] cluster config:
	{Name:download-only-850350 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-850350 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 06:20:39.098232 1283513 out.go:99] Starting "download-only-850350" primary control-plane node in "download-only-850350" cluster
	I1002 06:20:39.098262 1283513 cache.go:123] Beginning downloading kic base image for docker with docker
	I1002 06:20:39.101036 1283513 out.go:99] Pulling base image v0.0.48-1759382731-21643 ...
	I1002 06:20:39.101065 1283513 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime docker
	I1002 06:20:39.101215 1283513 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 06:20:39.117257 1283513 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d to local cache
	I1002 06:20:39.118086 1283513 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local cache directory
	I1002 06:20:39.118193 1283513 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d to local cache
	I1002 06:20:39.160571 1283513 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-arm64.tar.lz4
	I1002 06:20:39.160599 1283513 cache.go:58] Caching tarball of preloaded images
	I1002 06:20:39.161425 1283513 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime docker
	I1002 06:20:39.166762 1283513 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1002 06:20:39.166791 1283513 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.28.0-docker-overlay2-arm64.tar.lz4 from gcs api...
	I1002 06:20:39.247585 1283513 preload.go:290] Got checksum from GCS API "002a73d62a3b066a08573cf3da2c8cb4"
	I1002 06:20:39.247712 1283513 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-arm64.tar.lz4?checksum=md5:002a73d62a3b066a08573cf3da2c8cb4 -> /home/jenkins/minikube-integration/21643-1281649/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-arm64.tar.lz4
	I1002 06:20:41.728665 1283513 cache.go:61] Finished verifying existence of preloaded tar for v1.28.0 on docker
	I1002 06:20:41.729104 1283513 profile.go:143] Saving config to /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/download-only-850350/config.json ...
	I1002 06:20:41.729141 1283513 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/download-only-850350/config.json: {Name:mkf9bbfd3e0010fe3223845ad4583f6a4e74ebdf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:20:41.730034 1283513 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime docker
	I1002 06:20:41.731047 1283513 download.go:108] Downloading: https://dl.k8s.io/release/v1.28.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.0/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/21643-1281649/.minikube/cache/linux/arm64/v1.28.0/kubectl
	
	
	* The control-plane node download-only-850350 host does not exist
	  To start a cluster, run: "minikube start -p download-only-850350"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.10s)

TestDownloadOnly/v1.28.0/DeleteAll (0.22s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.22s)

TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-850350
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

TestDownloadOnly/v1.34.1/json-events (3.7s)

=== RUN   TestDownloadOnly/v1.34.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-973179 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-973179 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=docker --driver=docker  --container-runtime=docker: (3.702788658s)
--- PASS: TestDownloadOnly/v1.34.1/json-events (3.70s)

TestDownloadOnly/v1.34.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.34.1/preload-exists
I1002 06:20:50.308114 1283508 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime docker
I1002 06:20:50.308158 1283508 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21643-1281649/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-docker-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.1/preload-exists (0.00s)

TestDownloadOnly/v1.34.1/LogsDuration (0.41s)

=== RUN   TestDownloadOnly/v1.34.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-973179
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-973179: exit status 85 (406.057104ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                     ARGS                                                                                      │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-850350 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=docker  --container-runtime=docker │ download-only-850350 │ jenkins │ v1.37.0 │ 02 Oct 25 06:20 UTC │                     │
	│ delete  │ --all                                                                                                                                                                         │ minikube             │ jenkins │ v1.37.0 │ 02 Oct 25 06:20 UTC │ 02 Oct 25 06:20 UTC │
	│ delete  │ -p download-only-850350                                                                                                                                                       │ download-only-850350 │ jenkins │ v1.37.0 │ 02 Oct 25 06:20 UTC │ 02 Oct 25 06:20 UTC │
	│ start   │ -o=json --download-only -p download-only-973179 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=docker --driver=docker  --container-runtime=docker │ download-only-973179 │ jenkins │ v1.37.0 │ 02 Oct 25 06:20 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 06:20:46
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 06:20:46.649638 1283715 out.go:360] Setting OutFile to fd 1 ...
	I1002 06:20:46.649755 1283715 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 06:20:46.649765 1283715 out.go:374] Setting ErrFile to fd 2...
	I1002 06:20:46.649772 1283715 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 06:20:46.650027 1283715 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-1281649/.minikube/bin
	I1002 06:20:46.650427 1283715 out.go:368] Setting JSON to true
	I1002 06:20:46.651300 1283715 start.go:130] hostinfo: {"hostname":"ip-172-31-29-130","uptime":21781,"bootTime":1759364266,"procs":160,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1002 06:20:46.651366 1283715 start.go:140] virtualization:  
	I1002 06:20:46.654702 1283715 out.go:99] [download-only-973179] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1002 06:20:46.654978 1283715 notify.go:220] Checking for updates...
	I1002 06:20:46.659405 1283715 out.go:171] MINIKUBE_LOCATION=21643
	I1002 06:20:46.662449 1283715 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 06:20:46.665379 1283715 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21643-1281649/kubeconfig
	I1002 06:20:46.668347 1283715 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21643-1281649/.minikube
	I1002 06:20:46.671247 1283715 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1002 06:20:46.677209 1283715 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1002 06:20:46.677510 1283715 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 06:20:46.710093 1283715 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1002 06:20:46.710197 1283715 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 06:20:46.770161 1283715 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:48 SystemTime:2025-10-02 06:20:46.761157556 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 06:20:46.770271 1283715 docker.go:318] overlay module found
	I1002 06:20:46.773242 1283715 out.go:99] Using the docker driver based on user configuration
	I1002 06:20:46.773282 1283715 start.go:304] selected driver: docker
	I1002 06:20:46.773289 1283715 start.go:924] validating driver "docker" against <nil>
	I1002 06:20:46.773408 1283715 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 06:20:46.835216 1283715 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:48 SystemTime:2025-10-02 06:20:46.826105568 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 06:20:46.835381 1283715 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1002 06:20:46.835667 1283715 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1002 06:20:46.835825 1283715 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1002 06:20:46.838954 1283715 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-973179 host does not exist
	  To start a cluster, run: "minikube start -p download-only-973179"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.1/LogsDuration (0.41s)

TestDownloadOnly/v1.34.1/DeleteAll (0.22s)

=== RUN   TestDownloadOnly/v1.34.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.34.1/DeleteAll (0.22s)

TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.15s)

=== RUN   TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-973179
--- PASS: TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.15s)

TestBinaryMirror (0.62s)

=== RUN   TestBinaryMirror
I1002 06:20:51.818851 1283508 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-785915 --alsologtostderr --binary-mirror http://127.0.0.1:36675 --driver=docker  --container-runtime=docker
helpers_test.go:175: Cleaning up "binary-mirror-785915" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-785915
--- PASS: TestBinaryMirror (0.62s)

TestOffline (92.28s)

=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-arm64 start -p offline-docker-200011 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=docker
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-arm64 start -p offline-docker-200011 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=docker: (1m29.54698631s)
helpers_test.go:175: Cleaning up "offline-docker-200011" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p offline-docker-200011
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p offline-docker-200011: (2.734758642s)
--- PASS: TestOffline (92.28s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-096496
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-096496: exit status 85 (69.479434ms)

-- stdout --
	* Profile "addons-096496" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-096496"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-096496
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-096496: exit status 85 (80.921096ms)

-- stdout --
	* Profile "addons-096496" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-096496"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

TestAddons/Setup (184.64s)

=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-arm64 start -p addons-096496 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-arm64 start -p addons-096496 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (3m4.634458041s)
--- PASS: TestAddons/Setup (184.64s)

TestAddons/serial/Volcano (42.69s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:876: volcano-admission stabilized in 70.828722ms
addons_test.go:884: volcano-controller stabilized in 71.607733ms
addons_test.go:868: volcano-scheduler stabilized in 71.632167ms
addons_test.go:890: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-scheduler-76c996c8bf-plvx8" [9b3e83ff-c67a-450e-b4ec-0b6b53b3a677] Running
addons_test.go:890: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.00409582s
addons_test.go:894: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-admission-6c447bd768-np9vz" [1abea102-0bec-454c-b8bf-c7ee560e49c4] Running
addons_test.go:894: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 6.003767216s
addons_test.go:898: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-controllers-6fd4f85cb8-lv2g7" [0ac9b79c-3842-4cb0-999d-884014533876] Running
addons_test.go:898: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.003546604s
addons_test.go:903: (dbg) Run:  kubectl --context addons-096496 delete -n volcano-system job volcano-admission-init
addons_test.go:909: (dbg) Run:  kubectl --context addons-096496 create -f testdata/vcjob.yaml
addons_test.go:917: (dbg) Run:  kubectl --context addons-096496 get vcjob -n my-volcano
addons_test.go:935: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:352: "test-job-nginx-0" [cc0fad69-9e14-4a22-b29c-bcc57193b6cf] Pending
helpers_test.go:352: "test-job-nginx-0" [cc0fad69-9e14-4a22-b29c-bcc57193b6cf] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "test-job-nginx-0" [cc0fad69-9e14-4a22-b29c-bcc57193b6cf] Running
addons_test.go:935: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 13.002938038s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-096496 addons disable volcano --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-096496 addons disable volcano --alsologtostderr -v=1: (11.986122225s)
--- PASS: TestAddons/serial/Volcano (42.69s)

TestAddons/serial/GCPAuth/Namespaces (0.21s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-096496 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-096496 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.21s)

TestAddons/serial/GCPAuth/FakeCredentials (9.97s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-096496 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-096496 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [a46b25bd-2634-4058-85c9-97835e46c2a6] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [a46b25bd-2634-4058-85c9-97835e46c2a6] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 9.004136679s
addons_test.go:694: (dbg) Run:  kubectl --context addons-096496 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-096496 describe sa gcp-auth-test
addons_test.go:720: (dbg) Run:  kubectl --context addons-096496 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:744: (dbg) Run:  kubectl --context addons-096496 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (9.97s)

TestAddons/parallel/Registry (15.09s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry
=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 4.525613ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-66898fdd98-xhbm2" [f123ab0b-974b-484f-8100-ba7354a99b47] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.003450685s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-wgbqk" [62a7dec5-34fe-4e22-8bd4-3db7ce8aa48e] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.002967115s
addons_test.go:392: (dbg) Run:  kubectl --context addons-096496 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-096496 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-096496 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.131534704s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-arm64 -p addons-096496 ip
2025/10/02 06:25:13 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-096496 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (15.09s)

TestAddons/parallel/RegistryCreds (0.78s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds
=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 3.4879ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-arm64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-096496
addons_test.go:332: (dbg) Run:  kubectl --context addons-096496 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-096496 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.78s)

TestAddons/parallel/Ingress (19.42s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-096496 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-096496 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-096496 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [40255fd9-6d7c-4a55-b826-ff11eb29af95] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [40255fd9-6d7c-4a55-b826-ff11eb29af95] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 8.003773782s
I1002 06:26:25.538400 1283508 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-arm64 -p addons-096496 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-096496 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-arm64 -p addons-096496 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-096496 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-096496 addons disable ingress-dns --alsologtostderr -v=1: (1.977832247s)
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-096496 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-096496 addons disable ingress --alsologtostderr -v=1: (7.761820287s)
--- PASS: TestAddons/parallel/Ingress (19.42s)

TestAddons/parallel/InspektorGadget (5.23s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-v852l" [ea83c689-360b-48d3-9f36-5685f44ce4af] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.003657421s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-096496 addons disable inspektor-gadget --alsologtostderr -v=1
--- PASS: TestAddons/parallel/InspektorGadget (5.23s)

TestAddons/parallel/MetricsServer (6s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 43.993148ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-6fr4x" [72ebdbc7-c93d-44d9-a946-d7aef753c3f6] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.005997566s
addons_test.go:463: (dbg) Run:  kubectl --context addons-096496 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-096496 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.00s)

TestAddons/parallel/CSI (49.82s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
I1002 06:25:39.514519 1283508 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1002 06:25:39.519010 1283508 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1002 06:25:39.519036 1283508 kapi.go:107] duration metric: took 7.219995ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 7.231014ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-096496 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-096496 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-096496 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-096496 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-096496 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-096496 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-096496 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [db97b567-6593-454b-8b26-0bf650e17c8e] Pending
helpers_test.go:352: "task-pv-pod" [db97b567-6593-454b-8b26-0bf650e17c8e] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [db97b567-6593-454b-8b26-0bf650e17c8e] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 7.003422345s
addons_test.go:572: (dbg) Run:  kubectl --context addons-096496 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-096496 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-096496 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-096496 delete pod task-pv-pod
addons_test.go:588: (dbg) Run:  kubectl --context addons-096496 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-096496 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-096496 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-096496 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-096496 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-096496 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-096496 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-096496 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-096496 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-096496 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-096496 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-096496 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-096496 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-096496 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-096496 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-096496 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-096496 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-096496 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-096496 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-096496 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-096496 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-096496 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [e789366e-35f6-432d-b969-a01fd4e0a590] Pending
helpers_test.go:352: "task-pv-pod-restore" [e789366e-35f6-432d-b969-a01fd4e0a590] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [e789366e-35f6-432d-b969-a01fd4e0a590] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.003337324s
addons_test.go:614: (dbg) Run:  kubectl --context addons-096496 delete pod task-pv-pod-restore
addons_test.go:614: (dbg) Done: kubectl --context addons-096496 delete pod task-pv-pod-restore: (1.474488395s)
addons_test.go:618: (dbg) Run:  kubectl --context addons-096496 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-096496 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-096496 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-096496 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-096496 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.973720039s)
--- PASS: TestAddons/parallel/CSI (49.82s)

TestAddons/parallel/Headlamp (17.26s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-096496 --alsologtostderr -v=1
addons_test.go:808: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-096496 --alsologtostderr -v=1: (1.557242944s)
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:352: "headlamp-85f8f8dc54-gk2sl" [0aafc7ba-dcf9-4786-b930-2f27d7074254] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-85f8f8dc54-gk2sl" [0aafc7ba-dcf9-4786-b930-2f27d7074254] Running
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 10.004962513s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-096496 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-096496 addons disable headlamp --alsologtostderr -v=1: (5.694488404s)
--- PASS: TestAddons/parallel/Headlamp (17.26s)

TestAddons/parallel/CloudSpanner (5.78s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-85f6b7fc65-mgvrb" [bfe1a985-d1fb-4a9e-9acf-b107736f5c7a] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.004360125s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-096496 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.78s)

TestAddons/parallel/LocalPath (51.5s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-096496 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-096496 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-096496 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-096496 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-096496 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-096496 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-096496 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [16050287-0e39-42de-aad0-f106b977ec27] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [16050287-0e39-42de-aad0-f106b977ec27] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [16050287-0e39-42de-aad0-f106b977ec27] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.004889965s
addons_test.go:967: (dbg) Run:  kubectl --context addons-096496 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-arm64 -p addons-096496 ssh "cat /opt/local-path-provisioner/pvc-a9956497-b730-4b3f-8ca6-adfca3c2d4fd_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-096496 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-096496 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-096496 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-096496 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (42.972901688s)
--- PASS: TestAddons/parallel/LocalPath (51.50s)

TestAddons/parallel/NvidiaDevicePlugin (5.61s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-hr5mp" [8c1a645f-cc8a-4900-904b-dde798df2310] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.006973815s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-096496 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.61s)

TestAddons/parallel/Yakd (11.76s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-lqct7" [526bc291-8b11-4a23-94e9-12669eb063a0] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003950731s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-096496 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-096496 addons disable yakd --alsologtostderr -v=1: (5.75043079s)
--- PASS: TestAddons/parallel/Yakd (11.76s)

TestAddons/StoppedEnableDisable (11.29s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-096496
addons_test.go:172: (dbg) Done: out/minikube-linux-arm64 stop -p addons-096496: (11.008818537s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-096496
addons_test.go:180: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-096496
addons_test.go:185: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-096496
--- PASS: TestAddons/StoppedEnableDisable (11.29s)

TestCertOptions (37.32s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-430710 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-430710 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker: (34.475895107s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-430710 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-430710 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-430710 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-430710" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-430710
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-430710: (2.131129925s)
--- PASS: TestCertOptions (37.32s)

TestCertExpiration (271.33s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-165907 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=docker
E1002 07:06:12.962610 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/functional-970698/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-165907 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=docker: (43.170688188s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-165907 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=docker
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-165907 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=docker: (45.378877084s)
helpers_test.go:175: Cleaning up "cert-expiration-165907" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-165907
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-165907: (2.78163452s)
--- PASS: TestCertExpiration (271.33s)

TestDockerFlags (45.88s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags
=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-linux-arm64 start -p docker-flags-870856 --cache-images=false --memory=3072 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:51: (dbg) Done: out/minikube-linux-arm64 start -p docker-flags-870856 --cache-images=false --memory=3072 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (42.696555728s)
docker_test.go:56: (dbg) Run:  out/minikube-linux-arm64 -p docker-flags-870856 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-linux-arm64 -p docker-flags-870856 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-870856" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-flags-870856
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-flags-870856: (2.362282463s)
--- PASS: TestDockerFlags (45.88s)

TestForceSystemdFlag (49.33s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-171404 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-171404 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (45.402415192s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-171404 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-171404" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-171404
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-171404: (3.382444566s)
--- PASS: TestForceSystemdFlag (49.33s)

TestForceSystemdEnv (48.28s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-460031 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-460031 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (45.163843977s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-460031 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-460031" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-460031
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-460031: (2.529947252s)
--- PASS: TestForceSystemdEnv (48.28s)

TestErrorSpam/setup (35.16s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-077077 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-077077 --driver=docker  --container-runtime=docker
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-077077 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-077077 --driver=docker  --container-runtime=docker: (35.155950894s)
--- PASS: TestErrorSpam/setup (35.16s)

TestErrorSpam/start (0.81s)

=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-077077 --log_dir /tmp/nospam-077077 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-077077 --log_dir /tmp/nospam-077077 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-077077 --log_dir /tmp/nospam-077077 start --dry-run
--- PASS: TestErrorSpam/start (0.81s)

TestErrorSpam/status (1.08s)

=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-077077 --log_dir /tmp/nospam-077077 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-077077 --log_dir /tmp/nospam-077077 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-077077 --log_dir /tmp/nospam-077077 status
--- PASS: TestErrorSpam/status (1.08s)

TestErrorSpam/pause (1.56s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-077077 --log_dir /tmp/nospam-077077 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-077077 --log_dir /tmp/nospam-077077 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-077077 --log_dir /tmp/nospam-077077 pause
--- PASS: TestErrorSpam/pause (1.56s)

TestErrorSpam/unpause (1.75s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-077077 --log_dir /tmp/nospam-077077 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-077077 --log_dir /tmp/nospam-077077 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-077077 --log_dir /tmp/nospam-077077 unpause
--- PASS: TestErrorSpam/unpause (1.75s)

TestErrorSpam/stop (11.17s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-077077 --log_dir /tmp/nospam-077077 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-arm64 -p nospam-077077 --log_dir /tmp/nospam-077077 stop: (10.910338267s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-077077 --log_dir /tmp/nospam-077077 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-077077 --log_dir /tmp/nospam-077077 stop
--- PASS: TestErrorSpam/stop (11.17s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21643-1281649/.minikube/files/etc/test/nested/copy/1283508/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (80.1s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-arm64 start -p functional-970698 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker
E1002 06:28:57.198995 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/addons-096496/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 06:28:57.206194 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/addons-096496/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 06:28:57.217659 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/addons-096496/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 06:28:57.239104 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/addons-096496/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 06:28:57.280637 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/addons-096496/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 06:28:57.362101 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/addons-096496/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 06:28:57.523556 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/addons-096496/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 06:28:57.844890 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/addons-096496/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 06:28:58.486465 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/addons-096496/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 06:28:59.768225 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/addons-096496/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 06:29:02.330345 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/addons-096496/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-arm64 start -p functional-970698 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker: (1m20.096154544s)
--- PASS: TestFunctional/serial/StartWithProxy (80.10s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (51.84s)

=== RUN   TestFunctional/serial/SoftStart
I1002 06:29:03.966246 1283508 config.go:182] Loaded profile config "functional-970698": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-arm64 start -p functional-970698 --alsologtostderr -v=8
E1002 06:29:07.452214 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/addons-096496/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 06:29:17.694066 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/addons-096496/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 06:29:38.175680 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/addons-096496/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Done: out/minikube-linux-arm64 start -p functional-970698 --alsologtostderr -v=8: (51.837967908s)
functional_test.go:678: soft start took 51.839281652s for "functional-970698" cluster.
I1002 06:29:55.804559 1283508 config.go:182] Loaded profile config "functional-970698": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/SoftStart (51.84s)

TestFunctional/serial/KubeContext (0.06s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

TestFunctional/serial/KubectlGetPods (0.1s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-970698 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.10s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.01s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-970698 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-970698 cache add registry.k8s.io/pause:3.1: (1.040935138s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-970698 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-970698 cache add registry.k8s.io/pause:3.3: (1.097257772s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-970698 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.01s)

TestFunctional/serial/CacheCmd/cache/add_local (1.11s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-970698 /tmp/TestFunctionalserialCacheCmdcacheadd_local1061637400/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-arm64 -p functional-970698 cache add minikube-local-cache-test:functional-970698
functional_test.go:1109: (dbg) Run:  out/minikube-linux-arm64 -p functional-970698 cache delete minikube-local-cache-test:functional-970698
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-970698
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.11s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.16s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.16s)

TestFunctional/serial/CacheCmd/cache/list (0.1s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.10s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.53s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-arm64 -p functional-970698 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.53s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.63s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-arm64 -p functional-970698 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 -p functional-970698 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-970698 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (319.458333ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-arm64 -p functional-970698 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-arm64 -p functional-970698 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.63s)

TestFunctional/serial/CacheCmd/cache/delete (0.12s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

TestFunctional/serial/MinikubeKubectlCmd (0.15s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-arm64 -p functional-970698 kubectl -- --context functional-970698 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.15s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-970698 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

TestFunctional/serial/ExtraConfig (55.87s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-arm64 start -p functional-970698 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1002 06:30:19.137599 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/addons-096496/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-arm64 start -p functional-970698 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (55.872165574s)
functional_test.go:776: restart took 55.872266914s for "functional-970698" cluster.
I1002 06:30:58.798311 1283508 config.go:182] Loaded profile config "functional-970698": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/ExtraConfig (55.87s)

TestFunctional/serial/ComponentHealth (0.09s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-970698 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.09s)

TestFunctional/serial/LogsCmd (1.36s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-arm64 -p functional-970698 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-arm64 -p functional-970698 logs: (1.358379941s)
--- PASS: TestFunctional/serial/LogsCmd (1.36s)

TestFunctional/serial/LogsFileCmd (1.34s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-arm64 -p functional-970698 logs --file /tmp/TestFunctionalserialLogsFileCmd1107816777/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-arm64 -p functional-970698 logs --file /tmp/TestFunctionalserialLogsFileCmd1107816777/001/logs.txt: (1.32566639s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.34s)

TestFunctional/serial/InvalidService (4.82s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-970698 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-970698
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-970698: exit status 115 (738.196712ms)

-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:31674 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-970698 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.82s)

TestFunctional/parallel/ConfigCmd (0.5s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-970698 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-970698 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-970698 config get cpus: exit status 14 (96.727491ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-970698 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-970698 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-970698 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-970698 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-970698 config get cpus: exit status 14 (67.29958ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.50s)

TestFunctional/parallel/DashboardCmd (12.22s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-970698 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-970698 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 1327975: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (12.22s)

TestFunctional/parallel/DryRun (0.44s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-arm64 start -p functional-970698 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-970698 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (199.884162ms)

-- stdout --
	* [functional-970698] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21643
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21643-1281649/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21643-1281649/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I1002 06:31:47.641246 1327603 out.go:360] Setting OutFile to fd 1 ...
	I1002 06:31:47.641375 1327603 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 06:31:47.641387 1327603 out.go:374] Setting ErrFile to fd 2...
	I1002 06:31:47.641391 1327603 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 06:31:47.641671 1327603 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-1281649/.minikube/bin
	I1002 06:31:47.642049 1327603 out.go:368] Setting JSON to false
	I1002 06:31:47.643019 1327603 start.go:130] hostinfo: {"hostname":"ip-172-31-29-130","uptime":22442,"bootTime":1759364266,"procs":208,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1002 06:31:47.643091 1327603 start.go:140] virtualization:  
	I1002 06:31:47.648816 1327603 out.go:179] * [functional-970698] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1002 06:31:47.651985 1327603 out.go:179]   - MINIKUBE_LOCATION=21643
	I1002 06:31:47.652031 1327603 notify.go:220] Checking for updates...
	I1002 06:31:47.658156 1327603 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 06:31:47.661164 1327603 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21643-1281649/kubeconfig
	I1002 06:31:47.664127 1327603 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21643-1281649/.minikube
	I1002 06:31:47.667123 1327603 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1002 06:31:47.670197 1327603 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 06:31:47.673599 1327603 config.go:182] Loaded profile config "functional-970698": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
	I1002 06:31:47.674242 1327603 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 06:31:47.702720 1327603 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1002 06:31:47.702856 1327603 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 06:31:47.766870 1327603 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-02 06:31:47.756930349 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 06:31:47.766992 1327603 docker.go:318] overlay module found
	I1002 06:31:47.770126 1327603 out.go:179] * Using the docker driver based on existing profile
	I1002 06:31:47.773147 1327603 start.go:304] selected driver: docker
	I1002 06:31:47.773182 1327603 start.go:924] validating driver "docker" against &{Name:functional-970698 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-970698 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 06:31:47.773329 1327603 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 06:31:47.776762 1327603 out.go:203] 
	W1002 06:31:47.779652 1327603 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1002 06:31:47.782768 1327603 out.go:203] 

** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-arm64 start -p functional-970698 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
--- PASS: TestFunctional/parallel/DryRun (0.44s)

TestFunctional/parallel/InternationalLanguage (0.22s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-arm64 start -p functional-970698 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-970698 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (217.205553ms)

-- stdout --
	* [functional-970698] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21643
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21643-1281649/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21643-1281649/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1002 06:31:48.083250 1327722 out.go:360] Setting OutFile to fd 1 ...
	I1002 06:31:48.083426 1327722 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 06:31:48.083438 1327722 out.go:374] Setting ErrFile to fd 2...
	I1002 06:31:48.083445 1327722 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 06:31:48.085164 1327722 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-1281649/.minikube/bin
	I1002 06:31:48.085692 1327722 out.go:368] Setting JSON to false
	I1002 06:31:48.086714 1327722 start.go:130] hostinfo: {"hostname":"ip-172-31-29-130","uptime":22442,"bootTime":1759364266,"procs":208,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1002 06:31:48.086790 1327722 start.go:140] virtualization:  
	I1002 06:31:48.090188 1327722 out.go:179] * [functional-970698] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	I1002 06:31:48.093180 1327722 out.go:179]   - MINIKUBE_LOCATION=21643
	I1002 06:31:48.093356 1327722 notify.go:220] Checking for updates...
	I1002 06:31:48.099061 1327722 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 06:31:48.102018 1327722 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21643-1281649/kubeconfig
	I1002 06:31:48.104944 1327722 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21643-1281649/.minikube
	I1002 06:31:48.107972 1327722 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1002 06:31:48.110937 1327722 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 06:31:48.114401 1327722 config.go:182] Loaded profile config "functional-970698": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
	I1002 06:31:48.114971 1327722 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 06:31:48.156215 1327722 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1002 06:31:48.156341 1327722 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 06:31:48.223039 1327722 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-02 06:31:48.213665628 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 06:31:48.223150 1327722 docker.go:318] overlay module found
	I1002 06:31:48.226365 1327722 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1002 06:31:48.229229 1327722 start.go:304] selected driver: docker
	I1002 06:31:48.229253 1327722 start.go:924] validating driver "docker" against &{Name:functional-970698 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-970698 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 06:31:48.229376 1327722 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 06:31:48.232867 1327722 out.go:203] 
	W1002 06:31:48.235713 1327722 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1002 06:31:48.238535 1327722 out.go:203] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.22s)

TestFunctional/parallel/StatusCmd (1.4s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-arm64 -p functional-970698 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-arm64 -p functional-970698 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-arm64 -p functional-970698 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.40s)

TestFunctional/parallel/ServiceCmdConnect (8.71s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-970698 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-970698 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-btv2f" [de94bb65-88e8-458b-9402-aaf3b5b6c239] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-connect-7d85dfc575-btv2f" [de94bb65-88e8-458b-9402-aaf3b5b6c239] Running
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.003830036s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-arm64 -p functional-970698 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.49.2:32696
functional_test.go:1680: http://192.168.49.2:32696: success! body:
Request served by hello-node-connect-7d85dfc575-btv2f

HTTP/1.1 GET /

Host: 192.168.49.2:32696
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (8.71s)

TestFunctional/parallel/AddonsCmd (0.15s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-arm64 -p functional-970698 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-arm64 -p functional-970698 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.15s)

TestFunctional/parallel/PersistentVolumeClaim (26.8s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [3cd55b51-23b0-4fbe-b718-f0bf61e30c85] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.002909873s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-970698 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-970698 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-970698 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-970698 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [059612ae-2be0-4945-ad3c-62fece807821] Pending
helpers_test.go:352: "sp-pod" [059612ae-2be0-4945-ad3c-62fece807821] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [059612ae-2be0-4945-ad3c-62fece807821] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.00374122s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-970698 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-970698 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-970698 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [d5a5bf3f-6df4-4fff-806a-54f0725a9930] Pending
helpers_test.go:352: "sp-pod" [d5a5bf3f-6df4-4fff-806a-54f0725a9930] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [d5a5bf3f-6df4-4fff-806a-54f0725a9930] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.010200936s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-970698 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (26.80s)

TestFunctional/parallel/SSHCmd (0.61s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-arm64 -p functional-970698 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-arm64 -p functional-970698 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.61s)

TestFunctional/parallel/CpCmd (2.16s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-970698 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-970698 ssh -n functional-970698 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-970698 cp functional-970698:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2582964457/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-970698 ssh -n functional-970698 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-970698 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-970698 ssh -n functional-970698 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.16s)

TestFunctional/parallel/FileSync (0.4s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/1283508/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-arm64 -p functional-970698 ssh "sudo cat /etc/test/nested/copy/1283508/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.40s)

TestFunctional/parallel/CertSync (2.26s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/1283508.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-970698 ssh "sudo cat /etc/ssl/certs/1283508.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/1283508.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-970698 ssh "sudo cat /usr/share/ca-certificates/1283508.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-970698 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/12835082.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-970698 ssh "sudo cat /etc/ssl/certs/12835082.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/12835082.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-970698 ssh "sudo cat /usr/share/ca-certificates/12835082.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-970698 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.26s)

TestFunctional/parallel/NodeLabels (0.11s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-970698 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.11s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.39s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-970698 ssh "sudo systemctl is-active crio"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-970698 ssh "sudo systemctl is-active crio": exit status 1 (393.492305ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.39s)

TestFunctional/parallel/License (0.35s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.35s)

TestFunctional/parallel/Version/short (0.08s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-arm64 -p functional-970698 version --short
--- PASS: TestFunctional/parallel/Version/short (0.08s)

TestFunctional/parallel/Version/components (1.34s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-arm64 -p functional-970698 version -o=json --components
functional_test.go:2275: (dbg) Done: out/minikube-linux-arm64 -p functional-970698 version -o=json --components: (1.340555809s)
--- PASS: TestFunctional/parallel/Version/components (1.34s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.3s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-970698 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-970698 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.1
registry.k8s.io/kube-proxy:v1.34.1
registry.k8s.io/kube-controller-manager:v1.34.1
registry.k8s.io/kube-apiserver:v1.34.1
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-970698
docker.io/kubernetesui/metrics-scraper:<none>
docker.io/kicbase/echo-server:latest
docker.io/kicbase/echo-server:functional-970698
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-970698 image ls --format short --alsologtostderr:
I1002 06:31:51.706821 1328405 out.go:360] Setting OutFile to fd 1 ...
I1002 06:31:51.707045 1328405 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 06:31:51.707072 1328405 out.go:374] Setting ErrFile to fd 2...
I1002 06:31:51.707093 1328405 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 06:31:51.707373 1328405 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-1281649/.minikube/bin
I1002 06:31:51.708009 1328405 config.go:182] Loaded profile config "functional-970698": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
I1002 06:31:51.709767 1328405 config.go:182] Loaded profile config "functional-970698": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
I1002 06:31:51.710301 1328405 cli_runner.go:164] Run: docker container inspect functional-970698 --format={{.State.Status}}
I1002 06:31:51.735223 1328405 ssh_runner.go:195] Run: systemctl --version
I1002 06:31:51.735281 1328405 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-970698
I1002 06:31:51.755608 1328405 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33964 SSHKeyPath:/home/jenkins/minikube-integration/21643-1281649/.minikube/machines/functional-970698/id_rsa Username:docker}
I1002 06:31:51.881005 1328405 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.30s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-970698 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-970698 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────────┬───────────────────┬───────────────┬────────┐
│                    IMAGE                    │        TAG        │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────────┼───────────────────┼───────────────┼────────┤
│ docker.io/library/minikube-local-cache-test │ functional-970698 │ f66fa89271455 │ 30B    │
│ registry.k8s.io/kube-proxy                  │ v1.34.1           │ 05baa95f5142d │ 74.7MB │
│ docker.io/library/nginx                     │ latest            │ 0777d15d89ece │ 198MB  │
│ registry.k8s.io/etcd                        │ 3.6.4-0           │ a1894772a478e │ 205MB  │
│ registry.k8s.io/pause                       │ 3.3               │ 3d18732f8686c │ 484kB  │
│ registry.k8s.io/pause                       │ latest            │ 8cb2091f603e7 │ 240kB  │
│ localhost/my-image                          │ functional-970698 │ a2b890c993c63 │ 1.41MB │
│ docker.io/library/nginx                     │ alpine            │ 35f3cbee4fb77 │ 52.9MB │
│ registry.k8s.io/pause                       │ 3.10.1            │ d7b100cd9a77b │ 514kB  │
│ registry.k8s.io/coredns/coredns             │ v1.12.1           │ 138784d87c9c5 │ 72.1MB │
│ docker.io/kubernetesui/dashboard            │ <none>            │ 20b332c9a70d8 │ 244MB  │
│ docker.io/kicbase/echo-server               │ functional-970698 │ ce2d2cda2d858 │ 4.78MB │
│ docker.io/kicbase/echo-server               │ latest            │ ce2d2cda2d858 │ 4.78MB │
│ docker.io/kubernetesui/metrics-scraper      │ <none>            │ a422e0e982356 │ 42.3MB │
│ registry.k8s.io/kube-apiserver              │ v1.34.1           │ 43911e833d64d │ 83.7MB │
│ registry.k8s.io/kube-controller-manager     │ v1.34.1           │ 7eb2c6ff0c5a7 │ 71.5MB │
│ registry.k8s.io/pause                       │ 3.1               │ 8057e0500773a │ 525kB  │
│ registry.k8s.io/kube-scheduler              │ v1.34.1           │ b5f57ec6b9867 │ 50.5MB │
│ gcr.io/k8s-minikube/storage-provisioner     │ v5                │ ba04bb24b9575 │ 29MB   │
│ gcr.io/k8s-minikube/busybox                 │ 1.28.4-glibc      │ 1611cd07b61d5 │ 3.55MB │
└─────────────────────────────────────────────┴───────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-970698 image ls --format table --alsologtostderr:
I1002 06:31:57.142922 1328867 out.go:360] Setting OutFile to fd 1 ...
I1002 06:31:57.143144 1328867 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 06:31:57.143172 1328867 out.go:374] Setting ErrFile to fd 2...
I1002 06:31:57.143188 1328867 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 06:31:57.143455 1328867 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-1281649/.minikube/bin
I1002 06:31:57.144158 1328867 config.go:182] Loaded profile config "functional-970698": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
I1002 06:31:57.144339 1328867 config.go:182] Loaded profile config "functional-970698": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
I1002 06:31:57.144876 1328867 cli_runner.go:164] Run: docker container inspect functional-970698 --format={{.State.Status}}
I1002 06:31:57.164934 1328867 ssh_runner.go:195] Run: systemctl --version
I1002 06:31:57.164990 1328867 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-970698
I1002 06:31:57.198143 1328867 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33964 SSHKeyPath:/home/jenkins/minikube-integration/21643-1281649/.minikube/machines/functional-970698/id_rsa Username:docker}
I1002 06:31:57.311524 1328867 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
2025/10/02 06:32:00 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.29s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-970698 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-970698 image ls --format json --alsologtostderr:
[{"id":"0777d15d89ecedd8739877d62d8983e9f4b081efa23140db06299b0abe7a985b","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"198000000"},{"id":"d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"514000"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"42300000"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"f66fa892714556761a110ac5240b824cb0a49c99ecc79977b17bcb2389620e6c","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-970698"],"size":"30"},{"id":"43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.1"],"size":"83700000"},{"id":"05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.34.1"],"size":"74700000"},{"id":"138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"72100000"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"484000"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3550000"},{"id":"a2b890c993c63e104bf22ab77d626c743e75b0e4391ae3815d3ec04e8303f1a0","repoDigests":[],"repoTags":["localhost/my-image:functional-970698"],"size":"1410000"},{"id":"7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.1"],"size":"71500000"},{"id":"35f3cbee4fb77c3efb39f2723a21ce181906139442a37de8ffc52d89641d9936","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"52900000"},{"id":"a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"205000000"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"244000000"},{"id":"ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-970698","docker.io/kicbase/echo-server:latest"],"size":"4780000"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29000000"},{"id":"b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.1"],"size":"50500000"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"525000"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-970698 image ls --format json --alsologtostderr:
I1002 06:31:56.870717 1328832 out.go:360] Setting OutFile to fd 1 ...
I1002 06:31:56.870912 1328832 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 06:31:56.870938 1328832 out.go:374] Setting ErrFile to fd 2...
I1002 06:31:56.870958 1328832 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 06:31:56.871222 1328832 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-1281649/.minikube/bin
I1002 06:31:56.871860 1328832 config.go:182] Loaded profile config "functional-970698": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
I1002 06:31:56.872035 1328832 config.go:182] Loaded profile config "functional-970698": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
I1002 06:31:56.872700 1328832 cli_runner.go:164] Run: docker container inspect functional-970698 --format={{.State.Status}}
I1002 06:31:56.893982 1328832 ssh_runner.go:195] Run: systemctl --version
I1002 06:31:56.894040 1328832 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-970698
I1002 06:31:56.922152 1328832 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33964 SSHKeyPath:/home/jenkins/minikube-integration/21643-1281649/.minikube/machines/functional-970698/id_rsa Username:docker}
I1002 06:31:57.031293 1328832 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.27s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.42s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-970698 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-970698 image ls --format yaml --alsologtostderr:
- id: 05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.34.1
size: "74700000"
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-970698
- docker.io/kicbase/echo-server:latest
size: "4780000"
- id: 138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "72100000"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "42300000"
- id: 7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.1
size: "71500000"
- id: b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.1
size: "50500000"
- id: 35f3cbee4fb77c3efb39f2723a21ce181906139442a37de8ffc52d89641d9936
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "52900000"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "484000"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "525000"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 0777d15d89ecedd8739877d62d8983e9f4b081efa23140db06299b0abe7a985b
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "198000000"
- id: a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "205000000"
- id: d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10.1
size: "514000"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29000000"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3550000"
- id: f66fa892714556761a110ac5240b824cb0a49c99ecc79977b17bcb2389620e6c
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-970698
size: "30"
- id: 43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.1
size: "83700000"

functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-970698 image ls --format yaml --alsologtostderr:
I1002 06:31:52.135693 1328472 out.go:360] Setting OutFile to fd 1 ...
I1002 06:31:52.136825 1328472 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 06:31:52.136876 1328472 out.go:374] Setting ErrFile to fd 2...
I1002 06:31:52.136910 1328472 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 06:31:52.137384 1328472 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-1281649/.minikube/bin
I1002 06:31:52.138554 1328472 config.go:182] Loaded profile config "functional-970698": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
I1002 06:31:52.138824 1328472 config.go:182] Loaded profile config "functional-970698": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
I1002 06:31:52.139587 1328472 cli_runner.go:164] Run: docker container inspect functional-970698 --format={{.State.Status}}
I1002 06:31:52.163267 1328472 ssh_runner.go:195] Run: systemctl --version
I1002 06:31:52.163337 1328472 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-970698
I1002 06:31:52.188710 1328472 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33964 SSHKeyPath:/home/jenkins/minikube-integration/21643-1281649/.minikube/machines/functional-970698/id_rsa Username:docker}
I1002 06:31:52.303034 1328472 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.42s)

TestFunctional/parallel/ImageCommands/ImageBuild (4.43s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-arm64 -p functional-970698 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-970698 ssh pgrep buildkitd: exit status 1 (355.747448ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-arm64 -p functional-970698 image build -t localhost/my-image:functional-970698 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-arm64 -p functional-970698 image build -t localhost/my-image:functional-970698 testdata/build --alsologtostderr: (3.791586731s)
functional_test.go:338: (dbg) Stderr: out/minikube-linux-arm64 -p functional-970698 image build -t localhost/my-image:functional-970698 testdata/build --alsologtostderr:
I1002 06:31:52.802196 1328578 out.go:360] Setting OutFile to fd 1 ...
I1002 06:31:52.803940 1328578 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 06:31:52.803974 1328578 out.go:374] Setting ErrFile to fd 2...
I1002 06:31:52.803984 1328578 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 06:31:52.804513 1328578 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-1281649/.minikube/bin
I1002 06:31:52.805726 1328578 config.go:182] Loaded profile config "functional-970698": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
I1002 06:31:52.807769 1328578 config.go:182] Loaded profile config "functional-970698": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
I1002 06:31:52.808369 1328578 cli_runner.go:164] Run: docker container inspect functional-970698 --format={{.State.Status}}
I1002 06:31:52.829673 1328578 ssh_runner.go:195] Run: systemctl --version
I1002 06:31:52.829767 1328578 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-970698
I1002 06:31:52.855660 1328578 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33964 SSHKeyPath:/home/jenkins/minikube-integration/21643-1281649/.minikube/machines/functional-970698/id_rsa Username:docker}
I1002 06:31:52.951549 1328578 build_images.go:161] Building image from path: /tmp/build.2417347870.tar
I1002 06:31:52.951634 1328578 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1002 06:31:52.961530 1328578 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2417347870.tar
I1002 06:31:52.966507 1328578 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2417347870.tar: stat -c "%s %y" /var/lib/minikube/build/build.2417347870.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.2417347870.tar': No such file or directory
I1002 06:31:52.966537 1328578 ssh_runner.go:362] scp /tmp/build.2417347870.tar --> /var/lib/minikube/build/build.2417347870.tar (3072 bytes)
I1002 06:31:52.986564 1328578 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2417347870
I1002 06:31:52.998325 1328578 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2417347870 -xf /var/lib/minikube/build/build.2417347870.tar
I1002 06:31:53.010604 1328578 docker.go:361] Building image: /var/lib/minikube/build/build.2417347870
I1002 06:31:53.010684 1328578 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-970698 /var/lib/minikube/build/build.2417347870
#0 building with "default" instance using docker driver

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.1s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.6s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.1s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9 527B / 527B done
#5 sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02 1.47kB / 1.47kB done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.1s
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.4s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.0s done
#5 DONE 0.6s

#6 [2/3] RUN true
#6 DONE 0.6s

#7 [3/3] ADD content.txt /
#7 DONE 0.1s

#8 exporting to image
#8 exporting layers 0.1s done
#8 writing image sha256:a2b890c993c63e104bf22ab77d626c743e75b0e4391ae3815d3ec04e8303f1a0 done
#8 naming to localhost/my-image:functional-970698 done
#8 DONE 0.1s
I1002 06:31:56.486972 1328578 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-970698 /var/lib/minikube/build/build.2417347870: (3.476259271s)
I1002 06:31:56.487048 1328578 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2417347870
I1002 06:31:56.494899 1328578 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2417347870.tar
I1002 06:31:56.506476 1328578 build_images.go:217] Built localhost/my-image:functional-970698 from /tmp/build.2417347870.tar
I1002 06:31:56.506507 1328578 build_images.go:133] succeeded building to: functional-970698
I1002 06:31:56.506512 1328578 build_images.go:134] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-970698 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.43s)

TestFunctional/parallel/ImageCommands/Setup (0.65s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-970698
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.65s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.4s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-arm64 -p functional-970698 image load --daemon kicbase/echo-server:functional-970698 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-arm64 -p functional-970698 image load --daemon kicbase/echo-server:functional-970698 --alsologtostderr: (1.031354934s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-970698 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.40s)

TestFunctional/parallel/DockerEnv/bash (1.35s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:514: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-arm64 -p functional-970698 docker-env) && out/minikube-linux-arm64 status -p functional-970698"
functional_test.go:537: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-arm64 -p functional-970698 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (1.35s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.02s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-970698 image load --daemon kicbase/echo-server:functional-970698 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-970698 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.02s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.24s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-970698 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.24s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.21s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-970698 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.21s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.2s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-970698 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.20s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-970698
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-970698 image load --daemon kicbase/echo-server:functional-970698 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-970698 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.26s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.39s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-arm64 -p functional-970698 image save kicbase/echo-server:functional-970698 /home/jenkins/workspace/Docker_Linux_docker_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.39s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.57s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-arm64 -p functional-970698 image rm kicbase/echo-server:functional-970698 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-970698 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.57s)

TestFunctional/parallel/ServiceCmd/DeployApp (8.28s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-970698 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-970698 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-gwnmt" [4301cf02-f934-4898-83dc-b4c8b08f075c] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-75c85bcc94-gwnmt" [4301cf02-f934-4898-83dc-b4c8b08f075c] Running
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 8.003827085s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (8.28s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.82s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-970698 image load /home/jenkins/workspace/Docker_Linux_docker_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-970698 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.82s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.43s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-970698
functional_test.go:439: (dbg) Run:  out/minikube-linux-arm64 -p functional-970698 image save --daemon kicbase/echo-server:functional-970698 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect kicbase/echo-server:functional-970698
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.43s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.52s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-970698 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-970698 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-970698 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 1324054: os: process already finished
helpers_test.go:519: unable to terminate pid 1323926: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-970698 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.52s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-970698 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.35s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-970698 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [cda8d43a-73c4-441a-89db-92a2be773f83] Pending
helpers_test.go:352: "nginx-svc" [cda8d43a-73c4-441a-89db-92a2be773f83] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx-svc" [cda8d43a-73c4-441a-89db-92a2be773f83] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 8.003438167s
I1002 06:31:23.668348 1283508 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.35s)

TestFunctional/parallel/ServiceCmd/List (0.35s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-arm64 -p functional-970698 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.35s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.35s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-arm64 -p functional-970698 service list -o json
functional_test.go:1504: Took "347.551552ms" to run "out/minikube-linux-arm64 -p functional-970698 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.35s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.38s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-arm64 -p functional-970698 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.49.2:30409
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.38s)

TestFunctional/parallel/ServiceCmd/Format (0.36s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-arm64 -p functional-970698 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.36s)

TestFunctional/parallel/ServiceCmd/URL (0.38s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-arm64 -p functional-970698 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:30409
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.38s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-970698 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.101.153.100 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-970698 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.56s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.56s)

TestFunctional/parallel/ProfileCmd/profile_list (0.51s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1330: Took "455.726766ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1344: Took "58.704344ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.51s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.43s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1381: Took "371.597607ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1394: Took "62.448329ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.43s)

TestFunctional/parallel/MountCmd/any-port (8.27s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-970698 /tmp/TestFunctionalparallelMountCmdany-port4176003939/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1759386695627612238" to /tmp/TestFunctionalparallelMountCmdany-port4176003939/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1759386695627612238" to /tmp/TestFunctionalparallelMountCmdany-port4176003939/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1759386695627612238" to /tmp/TestFunctionalparallelMountCmdany-port4176003939/001/test-1759386695627612238
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-970698 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-970698 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (383.156202ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1002 06:31:36.011787 1283508 retry.go:31] will retry after 631.870005ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-970698 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-970698 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Oct  2 06:31 created-by-test
-rw-r--r-- 1 docker docker 24 Oct  2 06:31 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Oct  2 06:31 test-1759386695627612238
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-970698 ssh cat /mount-9p/test-1759386695627612238
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-970698 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [0f622999-c482-4a1e-bfd6-bef171186e27] Pending
helpers_test.go:352: "busybox-mount" [0f622999-c482-4a1e-bfd6-bef171186e27] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
E1002 06:31:41.059443 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/addons-096496/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "busybox-mount" [0f622999-c482-4a1e-bfd6-bef171186e27] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [0f622999-c482-4a1e-bfd6-bef171186e27] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.003720527s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-970698 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-970698 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-970698 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-970698 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-970698 /tmp/TestFunctionalparallelMountCmdany-port4176003939/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.27s)

TestFunctional/parallel/MountCmd/specific-port (2.26s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-970698 /tmp/TestFunctionalparallelMountCmdspecific-port1817706645/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-970698 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-970698 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (382.05541ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1002 06:31:44.283461 1283508 retry.go:31] will retry after 694.585465ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-970698 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-970698 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-970698 /tmp/TestFunctionalparallelMountCmdspecific-port1817706645/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-970698 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-970698 ssh "sudo umount -f /mount-9p": exit status 1 (275.992973ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-970698 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-970698 /tmp/TestFunctionalparallelMountCmdspecific-port1817706645/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.26s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.43s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-970698 /tmp/TestFunctionalparallelMountCmdVerifyCleanup789140240/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-970698 /tmp/TestFunctionalparallelMountCmdVerifyCleanup789140240/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-970698 /tmp/TestFunctionalparallelMountCmdVerifyCleanup789140240/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-970698 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-970698 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-970698 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-970698 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-970698 /tmp/TestFunctionalparallelMountCmdVerifyCleanup789140240/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-970698 /tmp/TestFunctionalparallelMountCmdVerifyCleanup789140240/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-970698 /tmp/TestFunctionalparallelMountCmdVerifyCleanup789140240/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.43s)

TestFunctional/delete_echo-server_images (0.06s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-970698
--- PASS: TestFunctional/delete_echo-server_images (0.06s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-970698
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-970698
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (174.76s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 -p ha-439497 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=docker
E1002 06:33:57.193648 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/addons-096496/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 06:34:24.900777 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/addons-096496/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 -p ha-439497 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=docker: (2m53.829065321s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-439497 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (174.76s)

TestMultiControlPlane/serial/DeployApp (7.81s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 -p ha-439497 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 -p ha-439497 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 -p ha-439497 kubectl -- rollout status deployment/busybox: (4.678946798s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-439497 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 -p ha-439497 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-439497 kubectl -- exec busybox-7b57f96db7-jl2xq -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-439497 kubectl -- exec busybox-7b57f96db7-ltkxx -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-439497 kubectl -- exec busybox-7b57f96db7-lwn2h -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-439497 kubectl -- exec busybox-7b57f96db7-jl2xq -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-439497 kubectl -- exec busybox-7b57f96db7-ltkxx -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-439497 kubectl -- exec busybox-7b57f96db7-lwn2h -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-439497 kubectl -- exec busybox-7b57f96db7-jl2xq -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-439497 kubectl -- exec busybox-7b57f96db7-ltkxx -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-439497 kubectl -- exec busybox-7b57f96db7-lwn2h -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (7.81s)

TestMultiControlPlane/serial/PingHostFromPods (1.79s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 -p ha-439497 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-439497 kubectl -- exec busybox-7b57f96db7-jl2xq -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-439497 kubectl -- exec busybox-7b57f96db7-jl2xq -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-439497 kubectl -- exec busybox-7b57f96db7-ltkxx -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-439497 kubectl -- exec busybox-7b57f96db7-ltkxx -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-439497 kubectl -- exec busybox-7b57f96db7-lwn2h -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-439497 kubectl -- exec busybox-7b57f96db7-lwn2h -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.79s)

TestMultiControlPlane/serial/AddWorkerNode (39.45s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 -p ha-439497 node add --alsologtostderr -v 5
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 -p ha-439497 node add --alsologtostderr -v 5: (38.427409686s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-439497 status --alsologtostderr -v 5
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-439497 status --alsologtostderr -v 5: (1.023781078s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (39.45s)

TestMultiControlPlane/serial/NodeLabels (0.1s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-439497 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.10s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (1.14s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.138327518s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.14s)

TestMultiControlPlane/serial/CopyFile (20.4s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-arm64 -p ha-439497 status --output json --alsologtostderr -v 5
ha_test.go:328: (dbg) Done: out/minikube-linux-arm64 -p ha-439497 status --output json --alsologtostderr -v 5: (1.067385527s)
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-439497 cp testdata/cp-test.txt ha-439497:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-439497 ssh -n ha-439497 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-439497 cp ha-439497:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3876558994/001/cp-test_ha-439497.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-439497 ssh -n ha-439497 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-439497 cp ha-439497:/home/docker/cp-test.txt ha-439497-m02:/home/docker/cp-test_ha-439497_ha-439497-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-439497 ssh -n ha-439497 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-439497 ssh -n ha-439497-m02 "sudo cat /home/docker/cp-test_ha-439497_ha-439497-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-439497 cp ha-439497:/home/docker/cp-test.txt ha-439497-m03:/home/docker/cp-test_ha-439497_ha-439497-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-439497 ssh -n ha-439497 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-439497 ssh -n ha-439497-m03 "sudo cat /home/docker/cp-test_ha-439497_ha-439497-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-439497 cp ha-439497:/home/docker/cp-test.txt ha-439497-m04:/home/docker/cp-test_ha-439497_ha-439497-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-439497 ssh -n ha-439497 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-439497 ssh -n ha-439497-m04 "sudo cat /home/docker/cp-test_ha-439497_ha-439497-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-439497 cp testdata/cp-test.txt ha-439497-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-439497 ssh -n ha-439497-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-439497 cp ha-439497-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3876558994/001/cp-test_ha-439497-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-439497 ssh -n ha-439497-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-439497 cp ha-439497-m02:/home/docker/cp-test.txt ha-439497:/home/docker/cp-test_ha-439497-m02_ha-439497.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-439497 ssh -n ha-439497-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-439497 ssh -n ha-439497 "sudo cat /home/docker/cp-test_ha-439497-m02_ha-439497.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-439497 cp ha-439497-m02:/home/docker/cp-test.txt ha-439497-m03:/home/docker/cp-test_ha-439497-m02_ha-439497-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-439497 ssh -n ha-439497-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-439497 ssh -n ha-439497-m03 "sudo cat /home/docker/cp-test_ha-439497-m02_ha-439497-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-439497 cp ha-439497-m02:/home/docker/cp-test.txt ha-439497-m04:/home/docker/cp-test_ha-439497-m02_ha-439497-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-439497 ssh -n ha-439497-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-439497 ssh -n ha-439497-m04 "sudo cat /home/docker/cp-test_ha-439497-m02_ha-439497-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-439497 cp testdata/cp-test.txt ha-439497-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-439497 ssh -n ha-439497-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-439497 cp ha-439497-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3876558994/001/cp-test_ha-439497-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-439497 ssh -n ha-439497-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-439497 cp ha-439497-m03:/home/docker/cp-test.txt ha-439497:/home/docker/cp-test_ha-439497-m03_ha-439497.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-439497 ssh -n ha-439497-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-439497 ssh -n ha-439497 "sudo cat /home/docker/cp-test_ha-439497-m03_ha-439497.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-439497 cp ha-439497-m03:/home/docker/cp-test.txt ha-439497-m02:/home/docker/cp-test_ha-439497-m03_ha-439497-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-439497 ssh -n ha-439497-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-439497 ssh -n ha-439497-m02 "sudo cat /home/docker/cp-test_ha-439497-m03_ha-439497-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-439497 cp ha-439497-m03:/home/docker/cp-test.txt ha-439497-m04:/home/docker/cp-test_ha-439497-m03_ha-439497-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-439497 ssh -n ha-439497-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-439497 ssh -n ha-439497-m04 "sudo cat /home/docker/cp-test_ha-439497-m03_ha-439497-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-439497 cp testdata/cp-test.txt ha-439497-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-439497 ssh -n ha-439497-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-439497 cp ha-439497-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3876558994/001/cp-test_ha-439497-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-439497 ssh -n ha-439497-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-439497 cp ha-439497-m04:/home/docker/cp-test.txt ha-439497:/home/docker/cp-test_ha-439497-m04_ha-439497.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-439497 ssh -n ha-439497-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-439497 ssh -n ha-439497 "sudo cat /home/docker/cp-test_ha-439497-m04_ha-439497.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-439497 cp ha-439497-m04:/home/docker/cp-test.txt ha-439497-m02:/home/docker/cp-test_ha-439497-m04_ha-439497-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-439497 ssh -n ha-439497-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-439497 ssh -n ha-439497-m02 "sudo cat /home/docker/cp-test_ha-439497-m04_ha-439497-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-439497 cp ha-439497-m04:/home/docker/cp-test.txt ha-439497-m03:/home/docker/cp-test_ha-439497-m04_ha-439497-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-439497 ssh -n ha-439497-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-439497 ssh -n ha-439497-m03 "sudo cat /home/docker/cp-test_ha-439497-m04_ha-439497-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (20.40s)

TestMultiControlPlane/serial/StopSecondaryNode (11.92s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p ha-439497 node stop m02 --alsologtostderr -v 5
E1002 06:36:12.962450 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/functional-970698/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 06:36:12.968741 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/functional-970698/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 06:36:12.980106 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/functional-970698/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 06:36:13.001811 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/functional-970698/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 06:36:13.043304 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/functional-970698/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 06:36:13.124874 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/functional-970698/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 06:36:13.286464 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/functional-970698/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 06:36:13.608220 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/functional-970698/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 06:36:14.249649 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/functional-970698/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 06:36:15.531077 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/functional-970698/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 06:36:18.092998 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/functional-970698/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p ha-439497 node stop m02 --alsologtostderr -v 5: (11.127329718s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-arm64 -p ha-439497 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-439497 status --alsologtostderr -v 5: exit status 7 (792.436461ms)

-- stdout --
	ha-439497
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-439497-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-439497-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-439497-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I1002 06:36:19.833711 1350694 out.go:360] Setting OutFile to fd 1 ...
	I1002 06:36:19.833902 1350694 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 06:36:19.833929 1350694 out.go:374] Setting ErrFile to fd 2...
	I1002 06:36:19.833953 1350694 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 06:36:19.834263 1350694 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-1281649/.minikube/bin
	I1002 06:36:19.834510 1350694 out.go:368] Setting JSON to false
	I1002 06:36:19.834576 1350694 mustload.go:65] Loading cluster: ha-439497
	I1002 06:36:19.835055 1350694 config.go:182] Loaded profile config "ha-439497": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
	I1002 06:36:19.835104 1350694 status.go:174] checking status of ha-439497 ...
	I1002 06:36:19.835695 1350694 cli_runner.go:164] Run: docker container inspect ha-439497 --format={{.State.Status}}
	I1002 06:36:19.834616 1350694 notify.go:220] Checking for updates...
	I1002 06:36:19.859857 1350694 status.go:371] ha-439497 host status = "Running" (err=<nil>)
	I1002 06:36:19.859881 1350694 host.go:66] Checking if "ha-439497" exists ...
	I1002 06:36:19.860240 1350694 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-439497
	I1002 06:36:19.882344 1350694 host.go:66] Checking if "ha-439497" exists ...
	I1002 06:36:19.882636 1350694 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 06:36:19.882679 1350694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439497
	I1002 06:36:19.904624 1350694 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33969 SSHKeyPath:/home/jenkins/minikube-integration/21643-1281649/.minikube/machines/ha-439497/id_rsa Username:docker}
	I1002 06:36:20.017806 1350694 ssh_runner.go:195] Run: systemctl --version
	I1002 06:36:20.027668 1350694 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 06:36:20.042035 1350694 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 06:36:20.109314 1350694 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:62 OomKillDisable:true NGoroutines:72 SystemTime:2025-10-02 06:36:20.097961829 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 06:36:20.109918 1350694 kubeconfig.go:125] found "ha-439497" server: "https://192.168.49.254:8443"
	I1002 06:36:20.109952 1350694 api_server.go:166] Checking apiserver status ...
	I1002 06:36:20.110001 1350694 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:36:20.124228 1350694 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2120/cgroup
	I1002 06:36:20.134757 1350694 api_server.go:182] apiserver freezer: "5:freezer:/docker/9ff29d9f99cfd08e9d369e3f72aaa721528cd6bfa111b96cc22871f2d5738ce2/kubepods/burstable/pod9baf76a86c454e256683be02abc663a0/ab5948b07eeb04585584ee347288a628b5d2aa5218faf1c1c87f1eeefc2c08a7"
	I1002 06:36:20.134840 1350694 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/9ff29d9f99cfd08e9d369e3f72aaa721528cd6bfa111b96cc22871f2d5738ce2/kubepods/burstable/pod9baf76a86c454e256683be02abc663a0/ab5948b07eeb04585584ee347288a628b5d2aa5218faf1c1c87f1eeefc2c08a7/freezer.state
	I1002 06:36:20.142970 1350694 api_server.go:204] freezer state: "THAWED"
	I1002 06:36:20.143001 1350694 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1002 06:36:20.151156 1350694 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1002 06:36:20.151183 1350694 status.go:463] ha-439497 apiserver status = Running (err=<nil>)
	I1002 06:36:20.151194 1350694 status.go:176] ha-439497 status: &{Name:ha-439497 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1002 06:36:20.151235 1350694 status.go:174] checking status of ha-439497-m02 ...
	I1002 06:36:20.151571 1350694 cli_runner.go:164] Run: docker container inspect ha-439497-m02 --format={{.State.Status}}
	I1002 06:36:20.178808 1350694 status.go:371] ha-439497-m02 host status = "Stopped" (err=<nil>)
	I1002 06:36:20.178835 1350694 status.go:384] host is not running, skipping remaining checks
	I1002 06:36:20.178844 1350694 status.go:176] ha-439497-m02 status: &{Name:ha-439497-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1002 06:36:20.178863 1350694 status.go:174] checking status of ha-439497-m03 ...
	I1002 06:36:20.179178 1350694 cli_runner.go:164] Run: docker container inspect ha-439497-m03 --format={{.State.Status}}
	I1002 06:36:20.197886 1350694 status.go:371] ha-439497-m03 host status = "Running" (err=<nil>)
	I1002 06:36:20.197913 1350694 host.go:66] Checking if "ha-439497-m03" exists ...
	I1002 06:36:20.198235 1350694 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-439497-m03
	I1002 06:36:20.216415 1350694 host.go:66] Checking if "ha-439497-m03" exists ...
	I1002 06:36:20.216735 1350694 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 06:36:20.216780 1350694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439497-m03
	I1002 06:36:20.234632 1350694 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33979 SSHKeyPath:/home/jenkins/minikube-integration/21643-1281649/.minikube/machines/ha-439497-m03/id_rsa Username:docker}
	I1002 06:36:20.337641 1350694 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 06:36:20.351211 1350694 kubeconfig.go:125] found "ha-439497" server: "https://192.168.49.254:8443"
	I1002 06:36:20.351241 1350694 api_server.go:166] Checking apiserver status ...
	I1002 06:36:20.351295 1350694 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:36:20.365918 1350694 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2179/cgroup
	I1002 06:36:20.381809 1350694 api_server.go:182] apiserver freezer: "5:freezer:/docker/8a82c21b3ed0108151bc51907f18bd93a4d66dcc61a289bdd14fffce5765fb5d/kubepods/burstable/podab35223f7b4d76ba4d846ddb7e028292/b231ea049c9e2b77f6a017e244518233e64969a4a65c78eb4c940121b14673e6"
	I1002 06:36:20.382151 1350694 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/8a82c21b3ed0108151bc51907f18bd93a4d66dcc61a289bdd14fffce5765fb5d/kubepods/burstable/podab35223f7b4d76ba4d846ddb7e028292/b231ea049c9e2b77f6a017e244518233e64969a4a65c78eb4c940121b14673e6/freezer.state
	I1002 06:36:20.393143 1350694 api_server.go:204] freezer state: "THAWED"
	I1002 06:36:20.393177 1350694 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1002 06:36:20.402833 1350694 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1002 06:36:20.402867 1350694 status.go:463] ha-439497-m03 apiserver status = Running (err=<nil>)
	I1002 06:36:20.402877 1350694 status.go:176] ha-439497-m03 status: &{Name:ha-439497-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1002 06:36:20.402895 1350694 status.go:174] checking status of ha-439497-m04 ...
	I1002 06:36:20.403216 1350694 cli_runner.go:164] Run: docker container inspect ha-439497-m04 --format={{.State.Status}}
	I1002 06:36:20.421249 1350694 status.go:371] ha-439497-m04 host status = "Running" (err=<nil>)
	I1002 06:36:20.421277 1350694 host.go:66] Checking if "ha-439497-m04" exists ...
	I1002 06:36:20.421562 1350694 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-439497-m04
	I1002 06:36:20.438618 1350694 host.go:66] Checking if "ha-439497-m04" exists ...
	I1002 06:36:20.438931 1350694 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 06:36:20.438976 1350694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-439497-m04
	I1002 06:36:20.456850 1350694 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33984 SSHKeyPath:/home/jenkins/minikube-integration/21643-1281649/.minikube/machines/ha-439497-m04/id_rsa Username:docker}
	I1002 06:36:20.553169 1350694 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 06:36:20.573505 1350694 status.go:176] ha-439497-m04 status: &{Name:ha-439497-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (11.92s)

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.85s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.85s)

TestMultiControlPlane/serial/RestartSecondaryNode (46.75s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p ha-439497 node start m02 --alsologtostderr -v 5
E1002 06:36:23.214660 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/functional-970698/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 06:36:33.456880 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/functional-970698/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 06:36:53.938391 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/functional-970698/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p ha-439497 node start m02 --alsologtostderr -v 5: (45.430468861s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-439497 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Done: out/minikube-linux-arm64 -p ha-439497 status --alsologtostderr -v 5: (1.209443547s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (46.75s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.17s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.166285644s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.17s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (222.85s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-arm64 -p ha-439497 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-arm64 -p ha-439497 stop --alsologtostderr -v 5
E1002 06:37:34.899820 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/functional-970698/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-arm64 -p ha-439497 stop --alsologtostderr -v 5: (34.286463542s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-arm64 -p ha-439497 start --wait true --alsologtostderr -v 5
E1002 06:38:56.824162 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/functional-970698/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 06:38:57.193664 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/addons-096496/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-arm64 -p ha-439497 start --wait true --alsologtostderr -v 5: (3m8.389109918s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-arm64 -p ha-439497 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (222.85s)

TestMultiControlPlane/serial/DeleteSecondaryNode (11.62s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-arm64 -p ha-439497 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-arm64 -p ha-439497 node delete m03 --alsologtostderr -v 5: (10.640330185s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-arm64 -p ha-439497 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (11.62s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.8s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.80s)

TestMultiControlPlane/serial/StopCluster (33.24s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-arm64 -p ha-439497 stop --alsologtostderr -v 5
E1002 06:41:12.964423 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/functional-970698/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-arm64 -p ha-439497 stop --alsologtostderr -v 5: (33.114918396s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-arm64 -p ha-439497 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-439497 status --alsologtostderr -v 5: exit status 7 (122.028291ms)

-- stdout --
	ha-439497
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-439497-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-439497-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1002 06:41:37.786252 1378951 out.go:360] Setting OutFile to fd 1 ...
	I1002 06:41:37.786417 1378951 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 06:41:37.786430 1378951 out.go:374] Setting ErrFile to fd 2...
	I1002 06:41:37.786446 1378951 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 06:41:37.786765 1378951 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-1281649/.minikube/bin
	I1002 06:41:37.787003 1378951 out.go:368] Setting JSON to false
	I1002 06:41:37.787060 1378951 mustload.go:65] Loading cluster: ha-439497
	I1002 06:41:37.787130 1378951 notify.go:220] Checking for updates...
	I1002 06:41:37.788512 1378951 config.go:182] Loaded profile config "ha-439497": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
	I1002 06:41:37.788567 1378951 status.go:174] checking status of ha-439497 ...
	I1002 06:41:37.789242 1378951 cli_runner.go:164] Run: docker container inspect ha-439497 --format={{.State.Status}}
	I1002 06:41:37.808904 1378951 status.go:371] ha-439497 host status = "Stopped" (err=<nil>)
	I1002 06:41:37.808924 1378951 status.go:384] host is not running, skipping remaining checks
	I1002 06:41:37.808931 1378951 status.go:176] ha-439497 status: &{Name:ha-439497 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1002 06:41:37.808961 1378951 status.go:174] checking status of ha-439497-m02 ...
	I1002 06:41:37.809296 1378951 cli_runner.go:164] Run: docker container inspect ha-439497-m02 --format={{.State.Status}}
	I1002 06:41:37.840188 1378951 status.go:371] ha-439497-m02 host status = "Stopped" (err=<nil>)
	I1002 06:41:37.840214 1378951 status.go:384] host is not running, skipping remaining checks
	I1002 06:41:37.840236 1378951 status.go:176] ha-439497-m02 status: &{Name:ha-439497-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1002 06:41:37.840256 1378951 status.go:174] checking status of ha-439497-m04 ...
	I1002 06:41:37.840568 1378951 cli_runner.go:164] Run: docker container inspect ha-439497-m04 --format={{.State.Status}}
	I1002 06:41:37.858584 1378951 status.go:371] ha-439497-m04 host status = "Stopped" (err=<nil>)
	I1002 06:41:37.858606 1378951 status.go:384] host is not running, skipping remaining checks
	I1002 06:41:37.858612 1378951 status.go:176] ha-439497-m04 status: &{Name:ha-439497-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (33.24s)

TestMultiControlPlane/serial/RestartCluster (116.48s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-arm64 -p ha-439497 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=docker
E1002 06:41:40.665519 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/functional-970698/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-arm64 -p ha-439497 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=docker: (1m55.508526571s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-arm64 -p ha-439497 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (116.48s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.82s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.82s)

TestMultiControlPlane/serial/AddSecondaryNode (96.37s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-arm64 -p ha-439497 node add --control-plane --alsologtostderr -v 5
E1002 06:43:57.193515 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/addons-096496/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-arm64 -p ha-439497 node add --control-plane --alsologtostderr -v 5: (1m35.24779882s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-arm64 -p ha-439497 status --alsologtostderr -v 5
ha_test.go:613: (dbg) Done: out/minikube-linux-arm64 -p ha-439497 status --alsologtostderr -v 5: (1.118319885s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (96.37s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.06s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.059417379s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.06s)

TestImageBuild/serial/Setup (36.23s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-linux-arm64 start -p image-382118 --driver=docker  --container-runtime=docker
image_test.go:69: (dbg) Done: out/minikube-linux-arm64 start -p image-382118 --driver=docker  --container-runtime=docker: (36.227809043s)
--- PASS: TestImageBuild/serial/Setup (36.23s)

TestImageBuild/serial/NormalBuild (1.81s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-382118
image_test.go:78: (dbg) Done: out/minikube-linux-arm64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-382118: (1.806860337s)
--- PASS: TestImageBuild/serial/NormalBuild (1.81s)

TestImageBuild/serial/BuildWithBuildArg (1.24s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-382118
image_test.go:99: (dbg) Done: out/minikube-linux-arm64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-382118: (1.240982164s)
--- PASS: TestImageBuild/serial/BuildWithBuildArg (1.24s)

TestImageBuild/serial/BuildWithDockerIgnore (0.9s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-382118
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.90s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.96s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-382118
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.96s)

TestJSONOutput/start/Command (74.45s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-544701 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=docker
E1002 06:46:12.966062 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/functional-970698/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-544701 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=docker: (1m14.45156661s)
--- PASS: TestJSONOutput/start/Command (74.45s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.64s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-544701 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.64s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.58s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-544701 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.58s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.88s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-544701 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-544701 --output=json --user=testUser: (5.878917391s)
--- PASS: TestJSONOutput/stop/Command (5.88s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.24s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-074234 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-074234 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (94.289353ms)

-- stdout --
	{"specversion":"1.0","id":"4b30c6be-c016-4733-b065-7d222fd7e6ac","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-074234] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"555c9acc-b4d2-435a-8787-9bed5539a6ec","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21643"}}
	{"specversion":"1.0","id":"b548b3bd-fee6-4927-b7e9-32bcecc5dbb3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"8bf07b13-ee9a-4688-9940-f88f570e57df","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21643-1281649/kubeconfig"}}
	{"specversion":"1.0","id":"f83322c2-cde4-443b-8c74-63390d15b6f0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21643-1281649/.minikube"}}
	{"specversion":"1.0","id":"6ec3cdd2-c7d9-4831-8b59-51d4d1510d3d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"dd142843-6192-4480-8969-d28be5588b3a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"e1a7f776-a517-4973-a4f2-05addd2877b7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-074234" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-074234
--- PASS: TestErrorJSONOutput (0.24s)

TestKicCustomNetwork/create_custom_network (41.92s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-148973 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-148973 --network=: (39.812516255s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-148973" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-148973
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-148973: (2.084038283s)
--- PASS: TestKicCustomNetwork/create_custom_network (41.92s)

TestKicCustomNetwork/use_default_bridge_network (37.61s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-305917 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-305917 --network=bridge: (35.441043409s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-305917" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-305917
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-305917: (2.117266696s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (37.61s)

TestKicExistingNetwork (36.34s)

=== RUN   TestKicExistingNetwork
I1002 06:48:47.339742 1283508 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1002 06:48:47.353908 1283508 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1002 06:48:47.353998 1283508 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1002 06:48:47.354019 1283508 cli_runner.go:164] Run: docker network inspect existing-network
W1002 06:48:47.369770 1283508 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1002 06:48:47.369801 1283508 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

stderr:
Error response from daemon: network existing-network not found
I1002 06:48:47.369815 1283508 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

** /stderr **
I1002 06:48:47.369936 1283508 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1002 06:48:47.386227 1283508 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-200b88fe63d3 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:3a:44:19:92:75:0d} reservation:<nil>}
I1002 06:48:47.386506 1283508 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40002cfd50}
I1002 06:48:47.386527 1283508 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1002 06:48:47.386576 1283508 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1002 06:48:47.445759 1283508 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-789795 --network=existing-network
E1002 06:48:57.194215 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/addons-096496/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-789795 --network=existing-network: (34.209857297s)
helpers_test.go:175: Cleaning up "existing-network-789795" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-789795
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-789795: (1.989565714s)
I1002 06:49:23.661455 1283508 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (36.34s)

TestKicCustomSubnet (39.95s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-720469 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-720469 --subnet=192.168.60.0/24: (37.745930604s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-720469 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-720469" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-720469
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-720469: (2.172421288s)
--- PASS: TestKicCustomSubnet (39.95s)

TestKicStaticIP (39.83s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-480348 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-480348 --static-ip=192.168.200.200: (37.605873744s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-480348 ip
helpers_test.go:175: Cleaning up "static-ip-480348" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-480348
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-480348: (2.071088126s)
--- PASS: TestKicStaticIP (39.83s)

TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.06s)

TestMinikubeProfile (72.8s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-184111 --driver=docker  --container-runtime=docker
E1002 06:51:12.964625 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/functional-970698/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-184111 --driver=docker  --container-runtime=docker: (33.138879957s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-186753 --driver=docker  --container-runtime=docker
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-186753 --driver=docker  --container-runtime=docker: (33.910228202s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-184111
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-186753
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-186753" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-186753
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-186753: (2.165674881s)
helpers_test.go:175: Cleaning up "first-184111" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-184111
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-184111: (2.189276683s)
--- PASS: TestMinikubeProfile (72.80s)

TestMountStart/serial/StartWithMountFirst (8.73s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-059673 --memory=3072 --mount-string /tmp/TestMountStartserial2503305418/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-059673 --memory=3072 --mount-string /tmp/TestMountStartserial2503305418/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (7.7255993s)
--- PASS: TestMountStart/serial/StartWithMountFirst (8.73s)

TestMountStart/serial/VerifyMountFirst (0.27s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-059673 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.27s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (11.12s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-061584 --memory=3072 --mount-string /tmp/TestMountStartserial2503305418/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-061584 --memory=3072 --mount-string /tmp/TestMountStartserial2503305418/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (10.115702845s)
--- PASS: TestMountStart/serial/StartWithMountSecond (11.12s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.28s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-061584 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.28s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.5s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-059673 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-059673 --alsologtostderr -v=5: (1.499697149s)
--- PASS: TestMountStart/serial/DeleteFirst (1.50s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.28s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-061584 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.28s)

                                                
                                    
TestMountStart/serial/Stop (1.21s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-061584
mount_start_test.go:196: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-061584: (1.209289363s)
--- PASS: TestMountStart/serial/Stop (1.21s)

                                                
                                    
TestMountStart/serial/RestartStopped (8.39s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-061584
mount_start_test.go:207: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-061584: (7.394219142s)
--- PASS: TestMountStart/serial/RestartStopped (8.39s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.27s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-061584 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.27s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (92.91s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-355238 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=docker
E1002 06:52:36.027823 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/functional-970698/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 06:53:57.194389 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/addons-096496/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-355238 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=docker: (1m32.366716315s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-355238 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (92.91s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (5.33s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-355238 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-355238 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-355238 -- rollout status deployment/busybox: (3.358768115s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-355238 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-355238 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-355238 -- exec busybox-7b57f96db7-4fl75 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-355238 -- exec busybox-7b57f96db7-n7vbq -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-355238 -- exec busybox-7b57f96db7-4fl75 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-355238 -- exec busybox-7b57f96db7-n7vbq -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-355238 -- exec busybox-7b57f96db7-4fl75 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-355238 -- exec busybox-7b57f96db7-n7vbq -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.33s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (1.04s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-355238 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-355238 -- exec busybox-7b57f96db7-4fl75 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-355238 -- exec busybox-7b57f96db7-4fl75 -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-355238 -- exec busybox-7b57f96db7-n7vbq -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-355238 -- exec busybox-7b57f96db7-n7vbq -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.04s)
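Note: the host-IP extraction run above (`nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3`) depends on a fixed nslookup output layout. A standalone sketch of the same pipeline against a hard-coded sample; the `Address 1:` line format is an assumption about busybox nslookup output, and a live pod would supply the real text:

```shell
# Sample busybox-style nslookup output (assumed layout; hard-coded for illustration).
sample='Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      host.minikube.internal
Address 1: 192.168.67.1'

# Line 5 holds the answer; field 3 (space-delimited) is the IP itself.
printf '%s\n' "$sample" | awk 'NR==5' | cut -d' ' -f3   # → 192.168.67.1
```

The `ping -c 1` step that follows in the test then verifies the extracted address is actually reachable from inside the pod.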

                                                
                                    
TestMultiNode/serial/AddNode (35s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-355238 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-355238 -v=5 --alsologtostderr: (34.203776866s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-355238 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (35.00s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.12s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-355238 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.12s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.95s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.95s)

                                                
                                    
TestMultiNode/serial/CopyFile (10.31s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-355238 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-355238 cp testdata/cp-test.txt multinode-355238:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-355238 ssh -n multinode-355238 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-355238 cp multinode-355238:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2971785481/001/cp-test_multinode-355238.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-355238 ssh -n multinode-355238 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-355238 cp multinode-355238:/home/docker/cp-test.txt multinode-355238-m02:/home/docker/cp-test_multinode-355238_multinode-355238-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-355238 ssh -n multinode-355238 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-355238 ssh -n multinode-355238-m02 "sudo cat /home/docker/cp-test_multinode-355238_multinode-355238-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-355238 cp multinode-355238:/home/docker/cp-test.txt multinode-355238-m03:/home/docker/cp-test_multinode-355238_multinode-355238-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-355238 ssh -n multinode-355238 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-355238 ssh -n multinode-355238-m03 "sudo cat /home/docker/cp-test_multinode-355238_multinode-355238-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-355238 cp testdata/cp-test.txt multinode-355238-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-355238 ssh -n multinode-355238-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-355238 cp multinode-355238-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2971785481/001/cp-test_multinode-355238-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-355238 ssh -n multinode-355238-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-355238 cp multinode-355238-m02:/home/docker/cp-test.txt multinode-355238:/home/docker/cp-test_multinode-355238-m02_multinode-355238.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-355238 ssh -n multinode-355238-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-355238 ssh -n multinode-355238 "sudo cat /home/docker/cp-test_multinode-355238-m02_multinode-355238.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-355238 cp multinode-355238-m02:/home/docker/cp-test.txt multinode-355238-m03:/home/docker/cp-test_multinode-355238-m02_multinode-355238-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-355238 ssh -n multinode-355238-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-355238 ssh -n multinode-355238-m03 "sudo cat /home/docker/cp-test_multinode-355238-m02_multinode-355238-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-355238 cp testdata/cp-test.txt multinode-355238-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-355238 ssh -n multinode-355238-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-355238 cp multinode-355238-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2971785481/001/cp-test_multinode-355238-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-355238 ssh -n multinode-355238-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-355238 cp multinode-355238-m03:/home/docker/cp-test.txt multinode-355238:/home/docker/cp-test_multinode-355238-m03_multinode-355238.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-355238 ssh -n multinode-355238-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-355238 ssh -n multinode-355238 "sudo cat /home/docker/cp-test_multinode-355238-m03_multinode-355238.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-355238 cp multinode-355238-m03:/home/docker/cp-test.txt multinode-355238-m02:/home/docker/cp-test_multinode-355238-m03_multinode-355238-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-355238 ssh -n multinode-355238-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-355238 ssh -n multinode-355238-m02 "sudo cat /home/docker/cp-test_multinode-355238-m03_multinode-355238-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.31s)
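The CopyFile sequence above round-trips a file host → node → host (and node → node) and compares contents at each hop. A local-filesystem sketch of the same round-trip check, with plain `cp` standing in for `minikube cp` since the real command needs a live cluster; all paths here are invented for illustration:

```shell
# Simulate: cp testdata/cp-test.txt into a "node", copy it back out, verify identical.
tmp=$(mktemp -d)
printf 'hello from cp-test\n' > "$tmp/cp-test.txt"        # stand-in for testdata/cp-test.txt
cp "$tmp/cp-test.txt" "$tmp/cp-test_node.txt"             # host -> node hop
cp "$tmp/cp-test_node.txt" "$tmp/cp-test_roundtrip.txt"   # node -> host hop
cmp "$tmp/cp-test.txt" "$tmp/cp-test_roundtrip.txt" && echo identical
rm -r "$tmp"
```

The test performs the equivalent verification with `ssh -n <node> "sudo cat <path>"` after each `cp`.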

                                                
                                    
TestMultiNode/serial/StopNode (2.32s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-355238 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-355238 node stop m03: (1.255333673s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-355238 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-355238 status: exit status 7 (522.901773ms)

-- stdout --
	multinode-355238
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-355238-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-355238-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-355238 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-355238 status --alsologtostderr: exit status 7 (541.838022ms)

-- stdout --
	multinode-355238
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-355238-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-355238-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1002 06:54:57.517278 1452978 out.go:360] Setting OutFile to fd 1 ...
	I1002 06:54:57.517449 1452978 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 06:54:57.517478 1452978 out.go:374] Setting ErrFile to fd 2...
	I1002 06:54:57.517498 1452978 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 06:54:57.517891 1452978 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-1281649/.minikube/bin
	I1002 06:54:57.518153 1452978 out.go:368] Setting JSON to false
	I1002 06:54:57.518216 1452978 mustload.go:65] Loading cluster: multinode-355238
	I1002 06:54:57.519081 1452978 notify.go:220] Checking for updates...
	I1002 06:54:57.519325 1452978 config.go:182] Loaded profile config "multinode-355238": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
	I1002 06:54:57.519356 1452978 status.go:174] checking status of multinode-355238 ...
	I1002 06:54:57.520007 1452978 cli_runner.go:164] Run: docker container inspect multinode-355238 --format={{.State.Status}}
	I1002 06:54:57.542428 1452978 status.go:371] multinode-355238 host status = "Running" (err=<nil>)
	I1002 06:54:57.542456 1452978 host.go:66] Checking if "multinode-355238" exists ...
	I1002 06:54:57.542750 1452978 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-355238
	I1002 06:54:57.574776 1452978 host.go:66] Checking if "multinode-355238" exists ...
	I1002 06:54:57.575138 1452978 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 06:54:57.575182 1452978 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-355238
	I1002 06:54:57.594828 1452978 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34094 SSHKeyPath:/home/jenkins/minikube-integration/21643-1281649/.minikube/machines/multinode-355238/id_rsa Username:docker}
	I1002 06:54:57.694071 1452978 ssh_runner.go:195] Run: systemctl --version
	I1002 06:54:57.700739 1452978 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 06:54:57.713704 1452978 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 06:54:57.775580 1452978 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-02 06:54:57.765847434 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 06:54:57.776230 1452978 kubeconfig.go:125] found "multinode-355238" server: "https://192.168.67.2:8443"
	I1002 06:54:57.776273 1452978 api_server.go:166] Checking apiserver status ...
	I1002 06:54:57.776319 1452978 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:54:57.789992 1452978 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2067/cgroup
	I1002 06:54:57.798780 1452978 api_server.go:182] apiserver freezer: "5:freezer:/docker/46782f3d90c7f98a5fa2c2e0d62bc6c5e26c6494fc363e344a45aee99f20485a/kubepods/burstable/pod4071282dfbaafefce713e95534bbed42/ba01b300e1d8d3edbbbbf77f7db7f2a75aa83818ccbf30cabff5f43229dacfe7"
	I1002 06:54:57.798853 1452978 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/46782f3d90c7f98a5fa2c2e0d62bc6c5e26c6494fc363e344a45aee99f20485a/kubepods/burstable/pod4071282dfbaafefce713e95534bbed42/ba01b300e1d8d3edbbbbf77f7db7f2a75aa83818ccbf30cabff5f43229dacfe7/freezer.state
	I1002 06:54:57.806885 1452978 api_server.go:204] freezer state: "THAWED"
	I1002 06:54:57.806913 1452978 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1002 06:54:57.815409 1452978 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1002 06:54:57.815437 1452978 status.go:463] multinode-355238 apiserver status = Running (err=<nil>)
	I1002 06:54:57.815448 1452978 status.go:176] multinode-355238 status: &{Name:multinode-355238 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1002 06:54:57.815464 1452978 status.go:174] checking status of multinode-355238-m02 ...
	I1002 06:54:57.815763 1452978 cli_runner.go:164] Run: docker container inspect multinode-355238-m02 --format={{.State.Status}}
	I1002 06:54:57.832661 1452978 status.go:371] multinode-355238-m02 host status = "Running" (err=<nil>)
	I1002 06:54:57.832689 1452978 host.go:66] Checking if "multinode-355238-m02" exists ...
	I1002 06:54:57.832998 1452978 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-355238-m02
	I1002 06:54:57.855254 1452978 host.go:66] Checking if "multinode-355238-m02" exists ...
	I1002 06:54:57.855584 1452978 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 06:54:57.855640 1452978 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-355238-m02
	I1002 06:54:57.873500 1452978 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34099 SSHKeyPath:/home/jenkins/minikube-integration/21643-1281649/.minikube/machines/multinode-355238-m02/id_rsa Username:docker}
	I1002 06:54:57.969427 1452978 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 06:54:57.982255 1452978 status.go:176] multinode-355238-m02 status: &{Name:multinode-355238-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1002 06:54:57.982289 1452978 status.go:174] checking status of multinode-355238-m03 ...
	I1002 06:54:57.982612 1452978 cli_runner.go:164] Run: docker container inspect multinode-355238-m03 --format={{.State.Status}}
	I1002 06:54:58.000102 1452978 status.go:371] multinode-355238-m03 host status = "Stopped" (err=<nil>)
	I1002 06:54:58.000131 1452978 status.go:384] host is not running, skipping remaining checks
	I1002 06:54:58.000139 1452978 status.go:176] multinode-355238-m03 status: &{Name:multinode-355238-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.32s)
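The status logs above include a disk-usage probe run over SSH on each node: `sh -c "df -h /var | awk 'NR==2{print $5}'"`. A standalone sketch of the same extraction against hard-coded sample `df` output (the filesystem line below is invented for illustration, not taken from this run):

```shell
# Sample df -h output (hypothetical values).
df_sample='Filesystem      Size  Used Avail Use% Mounted on
/dev/vda1        97G   24G   69G  26% /var'

# Line 2 is the data row; field 5 is the Use% column minikube reports on.
printf '%s\n' "$df_sample" | awk 'NR==2{print $5}'   # → 26%
```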

                                                
                                    
TestMultiNode/serial/StartAfterStop (11.05s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-355238 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-355238 node start m03 -v=5 --alsologtostderr: (10.253337512s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-355238 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (11.05s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (79.71s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-355238
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-355238
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-355238: (22.731181494s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-355238 --wait=true -v=5 --alsologtostderr
E1002 06:56:12.962048 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/functional-970698/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-355238 --wait=true -v=5 --alsologtostderr: (56.833714768s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-355238
--- PASS: TestMultiNode/serial/RestartKeepsNodes (79.71s)

                                                
                                    
TestMultiNode/serial/DeleteNode (5.79s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-355238 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-355238 node delete m03: (5.114648901s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-355238 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.79s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (21.93s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-355238 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-355238 stop: (21.736034932s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-355238 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-355238 status: exit status 7 (98.467239ms)

-- stdout --
	multinode-355238
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-355238-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-355238 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-355238 status --alsologtostderr: exit status 7 (93.378417ms)

-- stdout --
	multinode-355238
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-355238-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1002 06:56:56.442707 1466679 out.go:360] Setting OutFile to fd 1 ...
	I1002 06:56:56.443092 1466679 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 06:56:56.443123 1466679 out.go:374] Setting ErrFile to fd 2...
	I1002 06:56:56.443146 1466679 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 06:56:56.443449 1466679 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-1281649/.minikube/bin
	I1002 06:56:56.443688 1466679 out.go:368] Setting JSON to false
	I1002 06:56:56.443739 1466679 mustload.go:65] Loading cluster: multinode-355238
	I1002 06:56:56.444205 1466679 config.go:182] Loaded profile config "multinode-355238": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
	I1002 06:56:56.444242 1466679 status.go:174] checking status of multinode-355238 ...
	I1002 06:56:56.444800 1466679 cli_runner.go:164] Run: docker container inspect multinode-355238 --format={{.State.Status}}
	I1002 06:56:56.445012 1466679 notify.go:220] Checking for updates...
	I1002 06:56:56.463257 1466679 status.go:371] multinode-355238 host status = "Stopped" (err=<nil>)
	I1002 06:56:56.463279 1466679 status.go:384] host is not running, skipping remaining checks
	I1002 06:56:56.463286 1466679 status.go:176] multinode-355238 status: &{Name:multinode-355238 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1002 06:56:56.463326 1466679 status.go:174] checking status of multinode-355238-m02 ...
	I1002 06:56:56.463680 1466679 cli_runner.go:164] Run: docker container inspect multinode-355238-m02 --format={{.State.Status}}
	I1002 06:56:56.485738 1466679 status.go:371] multinode-355238-m02 host status = "Stopped" (err=<nil>)
	I1002 06:56:56.485763 1466679 status.go:384] host is not running, skipping remaining checks
	I1002 06:56:56.485776 1466679 status.go:176] multinode-355238-m02 status: &{Name:multinode-355238-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (21.93s)

TestMultiNode/serial/RestartMultiNode (53.16s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-355238 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=docker
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-355238 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=docker: (52.469370073s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-355238 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (53.16s)

TestMultiNode/serial/ValidateNameConflict (34.75s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-355238
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-355238-m02 --driver=docker  --container-runtime=docker
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-355238-m02 --driver=docker  --container-runtime=docker: exit status 14 (95.575924ms)

-- stdout --
	* [multinode-355238-m02] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21643
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21643-1281649/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21643-1281649/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
-- /stdout --
** stderr ** 
	! Profile name 'multinode-355238-m02' is duplicated with machine name 'multinode-355238-m02' in profile 'multinode-355238'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-355238-m03 --driver=docker  --container-runtime=docker
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-355238-m03 --driver=docker  --container-runtime=docker: (32.09665369s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-355238
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-355238: exit status 80 (359.733074ms)

-- stdout --
	* Adding node m03 to cluster multinode-355238 as [worker]
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-355238-m03 already exists in multinode-355238-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-355238-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-355238-m03: (2.143191349s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (34.75s)

TestPreload (120.58s)

=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-876119 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.32.0
E1002 06:58:57.194240 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/addons-096496/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:43: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-876119 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.32.0: (49.646528595s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-876119 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-arm64 -p test-preload-876119 image pull gcr.io/k8s-minikube/busybox: (2.418019737s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-876119
preload_test.go:57: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-876119: (10.904755071s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-876119 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker
preload_test.go:65: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-876119 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker: (55.086514067s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-876119 image list
helpers_test.go:175: Cleaning up "test-preload-876119" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-876119
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-876119: (2.284246683s)
--- PASS: TestPreload (120.58s)

TestSkaffold (144.85s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /tmp/skaffold.exe866504916 version
skaffold_test.go:63: skaffold version: v2.16.1
skaffold_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p skaffold-509884 --memory=3072 --driver=docker  --container-runtime=docker
E1002 07:01:12.962514 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/functional-970698/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
skaffold_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p skaffold-509884 --memory=3072 --driver=docker  --container-runtime=docker: (38.215428795s)
skaffold_test.go:86: copying out/minikube-linux-arm64 to /home/jenkins/workspace/Docker_Linux_docker_arm64/out/minikube
skaffold_test.go:105: (dbg) Run:  /tmp/skaffold.exe866504916 run --minikube-profile skaffold-509884 --kube-context skaffold-509884 --status-check=true --port-forward=false --interactive=false
E1002 07:02:00.267422 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/addons-096496/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
skaffold_test.go:105: (dbg) Done: /tmp/skaffold.exe866504916 run --minikube-profile skaffold-509884 --kube-context skaffold-509884 --status-check=true --port-forward=false --interactive=false: (1m31.188534711s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:352: "leeroy-app-d94cbcb69-gdmwf" [71be601f-bf10-4ce3-b0d6-2e11f1c1788d] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 6.003948227s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:352: "leeroy-web-6885f4fb65-cr5g8" [9ffd7dc5-7181-4790-92d4-22638384e253] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.002897861s
helpers_test.go:175: Cleaning up "skaffold-509884" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p skaffold-509884
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p skaffold-509884: (2.945617209s)
--- PASS: TestSkaffold (144.85s)

TestInsufficientStorage (14.03s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-142722 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=docker
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-142722 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=docker: exit status 26 (11.710189331s)

-- stdout --
	{"specversion":"1.0","id":"ccd76f09-5e73-4145-b701-781bb20520e3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-142722] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"fc2d8650-537a-43e0-9299-cd82b9154384","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21643"}}
	{"specversion":"1.0","id":"13d196a7-4fca-4e94-b741-bc7ed066702b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"cb036027-2403-4f4f-b867-642379882b76","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21643-1281649/kubeconfig"}}
	{"specversion":"1.0","id":"868bd0e4-5643-4001-8161-17a9625f4405","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21643-1281649/.minikube"}}
	{"specversion":"1.0","id":"ca4a99da-7c77-4e2e-92a3-2c3fd413a797","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"c721b72c-60a6-4d60-a8b4-79e25beaed04","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"1ff33ed8-6b9c-40a7-8cfc-de12400c3c40","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"172c2d2c-46b1-4d5b-b8fb-13f1e0dd47fe","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"d2b1bdae-c033-4347-b357-25a5448ab072","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"2f10c079-5ae0-483b-b207-3e1dd584d7c2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"3f3124e9-1495-4463-8f19-d4ed7d1caf33","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-142722\" primary control-plane node in \"insufficient-storage-142722\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"4c94117b-8f1a-4b6e-823f-c8548d4ce116","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1759382731-21643 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"7719b56a-bde2-47a3-a346-3ec2c7872fc5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"6326521d-48b7-4bff-bbf1-d34862b0c854","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-142722 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-142722 --output=json --layout=cluster: exit status 7 (309.582987ms)

-- stdout --
	{"Name":"insufficient-storage-142722","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-142722","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
** stderr ** 
	E1002 07:03:48.074637 1499786 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-142722" does not appear in /home/jenkins/minikube-integration/21643-1281649/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-142722 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-142722 --output=json --layout=cluster: exit status 7 (307.434338ms)

-- stdout --
	{"Name":"insufficient-storage-142722","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-142722","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
** stderr ** 
	E1002 07:03:48.382301 1499852 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-142722" does not appear in /home/jenkins/minikube-integration/21643-1281649/kubeconfig
	E1002 07:03:48.392785 1499852 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/insufficient-storage-142722/events.json: no such file or directory

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-142722" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-142722
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-142722: (1.703409256s)
--- PASS: TestInsufficientStorage (14.03s)

TestRunningBinaryUpgrade (93.27s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.3962613192 start -p running-upgrade-665881 --memory=3072 --vm-driver=docker  --container-runtime=docker
E1002 07:11:05.955720 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/skaffold-509884/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:11:12.962018 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/functional-970698/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.3962613192 start -p running-upgrade-665881 --memory=3072 --vm-driver=docker  --container-runtime=docker: (41.38173762s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-665881 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-665881 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (48.842037883s)
helpers_test.go:175: Cleaning up "running-upgrade-665881" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-665881
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-665881: (2.189575336s)
--- PASS: TestRunningBinaryUpgrade (93.27s)

TestKubernetesUpgrade (384.84s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-358891 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-358891 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (49.785236599s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-358891
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-358891: (1.511643716s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-358891 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-358891 status --format={{.Host}}: exit status 7 (112.750851ms)

-- stdout --
	Stopped
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-358891 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-358891 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (4m41.147480898s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-358891 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-358891 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=docker
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-358891 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=docker: exit status 106 (134.666585ms)

-- stdout --
	* [kubernetes-upgrade-358891] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21643
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21643-1281649/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21643-1281649/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-358891
	    minikube start -p kubernetes-upgrade-358891 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-3588912 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.1, by running:
	    
	    minikube start -p kubernetes-upgrade-358891 --kubernetes-version=v1.34.1

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-358891 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-358891 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (49.050856635s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-358891" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-358891
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-358891: (2.971996548s)
--- PASS: TestKubernetesUpgrade (384.84s)

TestMissingContainerUpgrade (105.49s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.32.0.779073598 start -p missing-upgrade-964755 --memory=3072 --driver=docker  --container-runtime=docker
E1002 07:09:16.030148 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/functional-970698/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.32.0.779073598 start -p missing-upgrade-964755 --memory=3072 --driver=docker  --container-runtime=docker: (34.868179099s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-964755
E1002 07:09:44.033556 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/skaffold-509884/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-964755: (10.42581769s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-964755
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-964755 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-964755 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (56.949745502s)
helpers_test.go:175: Cleaning up "missing-upgrade-964755" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-964755
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-964755: (2.541240145s)
--- PASS: TestMissingContainerUpgrade (105.49s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:85: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-760549 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:85: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-760549 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=docker: exit status 14 (109.037459ms)

-- stdout --
	* [NoKubernetes-760549] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21643
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21643-1281649/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21643-1281649/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)

TestNoKubernetes/serial/StartWithK8s (43.02s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:97: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-760549 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
E1002 07:03:57.194314 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/addons-096496/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
no_kubernetes_test.go:97: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-760549 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (42.559057835s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-760549 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (43.02s)

TestNoKubernetes/serial/StartWithStopK8s (18.32s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:114: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-760549 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:114: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-760549 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (16.190366147s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-760549 status -o json
no_kubernetes_test.go:202: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-760549 status -o json: exit status 2 (376.035006ms)

-- stdout --
	{"Name":"NoKubernetes-760549","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:126: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-760549
no_kubernetes_test.go:126: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-760549: (1.756247446s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (18.32s)
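The non-zero exit above is expected: with Kubernetes components stopped, `minikube status` exits non-zero (2 here) while still emitting the machine-readable verdict on stdout. A minimal Python sketch of checking that JSON, using the field names exactly as printed above:

```python
import json

# Status JSON as printed by `minikube status -o json` above.
status_json = (
    '{"Name":"NoKubernetes-760549","Host":"Running","Kubelet":"Stopped",'
    '"APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}'
)
status = json.loads(status_json)

# The host container is up, but the Kubernetes components are intentionally
# stopped after restarting the profile with --no-kubernetes.
assert status["Host"] == "Running"
assert status["Kubelet"] == "Stopped" and status["APIServer"] == "Stopped"
print(f'{status["Name"]}: host={status["Host"]}, kubelet={status["Kubelet"]}')
```
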

TestNoKubernetes/serial/Start (10.99s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:138: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-760549 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:138: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-760549 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (10.988633655s)
--- PASS: TestNoKubernetes/serial/Start (10.99s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.29s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-760549 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-760549 "sudo systemctl is-active --quiet service kubelet": exit status 1 (293.061596ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.29s)
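The exit status 1 here is the passing outcome: `systemctl is-active` exits 0 only when the unit is active, and the status 3 reported over ssh above is the conventional code for an inactive unit (the exact non-zero code is an assumption based on common systemd/LSB behavior). A small Python sketch of how the check reads that status:

```python
# Interpret the exit status of `systemctl is-active <unit>`:
# 0 means the unit is active; non-zero (conventionally 3 for inactive)
# means it is not. The test above *wants* non-zero, since kubelet must
# not be running in a --no-kubernetes profile.
def kubelet_check_passed(exit_status: int) -> bool:
    """Return True when the 'kubelet must NOT be active' check passes."""
    return exit_status != 0

# Status 3 is what ssh reported above: kubelet inactive, check passes.
assert kubelet_check_passed(3)
# Status 0 would mean kubelet is running, which would fail the test.
assert not kubelet_check_passed(0)
```
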

TestNoKubernetes/serial/ProfileList (1.1s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:171: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:181: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.10s)

TestNoKubernetes/serial/Stop (1.24s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:160: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-760549
no_kubernetes_test.go:160: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-760549: (1.244703718s)
--- PASS: TestNoKubernetes/serial/Stop (1.24s)

TestNoKubernetes/serial/StartNoArgs (7.75s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:193: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-760549 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:193: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-760549 --driver=docker  --container-runtime=docker: (7.745636391s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.75s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.3s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-760549 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-760549 "sudo systemctl is-active --quiet service kubelet": exit status 1 (295.363151ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.30s)

TestStoppedBinaryUpgrade/Setup (0.68s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.68s)

TestStoppedBinaryUpgrade/Upgrade (92.46s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.3779640920 start -p stopped-upgrade-097954 --memory=3072 --vm-driver=docker  --container-runtime=docker
E1002 07:08:22.096228 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/skaffold-509884/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:08:22.102629 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/skaffold-509884/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:08:22.113917 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/skaffold-509884/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:08:22.135267 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/skaffold-509884/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:08:22.176566 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/skaffold-509884/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:08:22.257913 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/skaffold-509884/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:08:22.420372 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/skaffold-509884/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:08:22.741956 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/skaffold-509884/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:08:23.383838 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/skaffold-509884/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:08:24.665124 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/skaffold-509884/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:08:27.226981 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/skaffold-509884/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.3779640920 start -p stopped-upgrade-097954 --memory=3072 --vm-driver=docker  --container-runtime=docker: (1m0.200214482s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.3779640920 -p stopped-upgrade-097954 stop
E1002 07:08:32.348398 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/skaffold-509884/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.3779640920 -p stopped-upgrade-097954 stop: (10.878662391s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-097954 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E1002 07:08:42.589751 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/skaffold-509884/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:08:57.194084 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/addons-096496/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:09:03.071683 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/skaffold-509884/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-097954 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (21.375116897s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (92.46s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.13s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-097954
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-097954: (1.134074625s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.13s)

TestPause/serial/Start (78.98s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-154499 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=docker
E1002 07:13:22.095045 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/skaffold-509884/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-154499 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=docker: (1m18.977248707s)
--- PASS: TestPause/serial/Start (78.98s)

TestPause/serial/SecondStartNoReconfiguration (51.58s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-154499 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E1002 07:13:49.797299 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/skaffold-509884/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:13:57.193758 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/addons-096496/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-154499 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (51.549302329s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (51.58s)

TestPause/serial/Pause (0.65s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-154499 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.65s)

TestPause/serial/VerifyStatus (0.33s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-154499 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-154499 --output=json --layout=cluster: exit status 2 (327.435313ms)

-- stdout --
	{"Name":"pause-154499","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, istio-operator","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-154499","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.33s)
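The `--layout=cluster` JSON above encodes component state with HTTP-flavored status codes (418 = Paused, 405 = Stopped, 200 = OK), and the CLI again exits non-zero (2) because the cluster is not fully running. A minimal Python sketch walking that structure, using a trimmed copy of the JSON printed above:

```python
import json

# Trimmed cluster-layout status from `minikube status --output=json
# --layout=cluster` above (Step/StepDetail/kubeconfig fields omitted).
layout = json.loads(
    '{"Name":"pause-154499","StatusCode":418,"StatusName":"Paused",'
    '"BinaryVersion":"v1.37.0",'
    '"Nodes":[{"Name":"pause-154499","StatusCode":200,"StatusName":"OK",'
    '"Components":{'
    '"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},'
    '"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}'
)

# Top-level verdict: the whole cluster reports Paused (418).
assert layout["StatusName"] == "Paused"

# Per-node components carry their own codes: apiserver paused,
# kubelet stopped while the cluster is paused.
for node in layout["Nodes"]:
    comps = {name: c["StatusName"] for name, c in node["Components"].items()}
    print(node["Name"], comps)

assert comps == {"apiserver": "Paused", "kubelet": "Stopped"}
```
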

TestPause/serial/Unpause (0.75s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-154499 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.75s)

TestPause/serial/PauseAgain (1s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-154499 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-154499 --alsologtostderr -v=5: (1.003166196s)
--- PASS: TestPause/serial/PauseAgain (1.00s)

TestPause/serial/DeletePaused (2.21s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-154499 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-154499 --alsologtostderr -v=5: (2.214657822s)
--- PASS: TestPause/serial/DeletePaused (2.21s)

TestPause/serial/VerifyDeletedResources (16.03s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (15.971386502s)
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-154499
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-154499: exit status 1 (21.263496ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: get pause-154499: no such volume

** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (16.03s)
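`docker volume inspect` on a deleted volume prints an empty JSON array on stdout, writes the "no such volume" error to stderr, and exits 1; the test treats that combination as proof the volume is gone. A sketch of that check in Python, with the stdout and exit status taken from the output above:

```python
import json

# What `docker volume inspect pause-154499` produced above after deletion:
stdout = "[]"       # empty JSON array: no matching volumes
exit_status = 1     # non-zero because the lookup failed

volumes = json.loads(stdout)

# The volume is confirmed deleted when inspect finds nothing and fails.
assert volumes == [] and exit_status != 0
print("volume pause-154499 is gone")
```
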

TestNetworkPlugins/group/auto/Start (55.77s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-980383 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-980383 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker: (55.767859055s)
--- PASS: TestNetworkPlugins/group/auto/Start (55.77s)

TestNetworkPlugins/group/auto/KubeletFlags (0.4s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-980383 "pgrep -a kubelet"
I1002 07:15:53.458424 1283508 config.go:182] Loaded profile config "auto-980383": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.40s)

TestNetworkPlugins/group/auto/NetCatPod (10.29s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-980383 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-zf46m" [5e9eaf61-d7be-4951-ba5b-a2c4178f10f0] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-zf46m" [5e9eaf61-d7be-4951-ba5b-a2c4178f10f0] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.00545172s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.29s)

TestNetworkPlugins/group/auto/DNS (0.36s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-980383 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.36s)

TestNetworkPlugins/group/auto/Localhost (0.22s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-980383 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.22s)

TestNetworkPlugins/group/auto/HairPin (0.22s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-980383 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.22s)

TestNetworkPlugins/group/kindnet/Start (68.21s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-980383 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-980383 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker: (1m8.212039334s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (68.21s)

TestNetworkPlugins/group/calico/Start (58.91s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-980383 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-980383 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker: (58.904963025s)
--- PASS: TestNetworkPlugins/group/calico/Start (58.91s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-7j5xn" [bf16c5a8-2e23-4872-b282-64f2beac71f3] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003507954s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.34s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-980383 "pgrep -a kubelet"
I1002 07:17:43.960004 1283508 config.go:182] Loaded profile config "kindnet-980383": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.34s)

TestNetworkPlugins/group/kindnet/NetCatPod (11.35s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-980383 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-f6kbl" [14108f6e-aeda-428f-bfd2-f7d644aa3b10] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-f6kbl" [14108f6e-aeda-428f-bfd2-f7d644aa3b10] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.004018808s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.35s)

TestNetworkPlugins/group/kindnet/DNS (0.22s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-980383 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.22s)

TestNetworkPlugins/group/kindnet/Localhost (0.47s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-980383 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.47s)

TestNetworkPlugins/group/kindnet/HairPin (0.26s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-980383 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.26s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-h87kq" [887faece-0515-43ad-a594-cb955ed7d894] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004950934s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.45s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-980383 "pgrep -a kubelet"
I1002 07:18:13.250672 1283508 config.go:182] Loaded profile config "calico-980383": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.45s)

TestNetworkPlugins/group/calico/NetCatPod (11.36s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-980383 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-tg6wb" [be7427d6-da19-4ecd-8a37-5d88b948af21] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-tg6wb" [be7427d6-da19-4ecd-8a37-5d88b948af21] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.005046173s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.36s)

TestNetworkPlugins/group/custom-flannel/Start (59.29s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-980383 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker
E1002 07:18:22.098190 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/skaffold-509884/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-980383 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker: (59.288701584s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (59.29s)

TestNetworkPlugins/group/calico/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-980383 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.20s)

TestNetworkPlugins/group/calico/Localhost (0.22s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-980383 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.22s)

TestNetworkPlugins/group/calico/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-980383 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/false/Start (77.66s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p false-980383 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker
E1002 07:18:57.193799 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/addons-096496/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p false-980383 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker: (1m17.6578391s)
--- PASS: TestNetworkPlugins/group/false/Start (77.66s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.32s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-980383 "pgrep -a kubelet"
I1002 07:19:20.594725 1283508 config.go:182] Loaded profile config "custom-flannel-980383": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.32s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (11.38s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-980383 replace --force -f testdata/netcat-deployment.yaml
I1002 07:19:20.965569 1283508 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-92gfn" [9d7549fc-2a77-4ef2-a376-3a38018b9973] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-92gfn" [9d7549fc-2a77-4ef2-a376-3a38018b9973] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.007128727s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.38s)

TestNetworkPlugins/group/custom-flannel/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-980383 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.18s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-980383 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.16s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.22s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-980383 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.22s)

TestNetworkPlugins/group/enable-default-cni/Start (81.82s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-980383 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-980383 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker: (1m21.815329557s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (81.82s)

TestNetworkPlugins/group/false/KubeletFlags (0.41s)

=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p false-980383 "pgrep -a kubelet"
I1002 07:20:10.587734 1283508 config.go:182] Loaded profile config "false-980383": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.41s)

TestNetworkPlugins/group/false/NetCatPod (10.37s)

=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-980383 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-kgq69" [95617b54-186e-4a4d-bf33-b6d5966c7ac2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-kgq69" [95617b54-186e-4a4d-bf33-b6d5966c7ac2] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 10.00557186s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (10.37s)

TestNetworkPlugins/group/false/DNS (0.26s)

=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-980383 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.26s)

TestNetworkPlugins/group/false/Localhost (0.20s)

=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-980383 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.20s)

TestNetworkPlugins/group/false/HairPin (0.26s)

=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-980383 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.26s)

TestNetworkPlugins/group/flannel/Start (52.68s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-980383 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker
E1002 07:20:53.720202 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/auto-980383/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:20:53.726585 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/auto-980383/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:20:53.738000 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/auto-980383/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:20:53.759354 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/auto-980383/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:20:53.800749 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/auto-980383/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:20:53.882127 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/auto-980383/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:20:54.043652 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/auto-980383/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:20:54.365251 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/auto-980383/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:20:55.007475 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/auto-980383/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:20:56.289164 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/auto-980383/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:20:58.850620 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/auto-980383/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:21:03.971947 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/auto-980383/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:21:12.962471 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/functional-970698/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:21:14.214074 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/auto-980383/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-980383 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker: (52.681579225s)
--- PASS: TestNetworkPlugins/group/flannel/Start (52.68s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.37s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-980383 "pgrep -a kubelet"
I1002 07:21:16.932694 1283508 config.go:182] Loaded profile config "enable-default-cni-980383": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.37s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.37s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-980383 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-2w6tn" [78a48b44-8baf-45ce-a2a5-b672d2a8c77f] Pending
helpers_test.go:352: "netcat-cd4db9dbf-2w6tn" [78a48b44-8baf-45ce-a2a5-b672d2a8c77f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 12.006762315s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.37s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.21s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-980383 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.21s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.21s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-980383 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.21s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.25s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-980383 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.25s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-pz967" [3e3dc4aa-e8c6-40b9-aad6-d1c5f4309a94] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.00648672s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
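The ControllerPod step above polls for `app=flannel` pods in the `kube-flannel` namespace until they are healthy. A rough stand-alone equivalent using `kubectl wait` (a sketch: the `flannel-980383` context comes from this run, and the 10-minute timeout mirrors the suite's `waiting 10m0s` message):

```shell
# Block until the flannel DaemonSet pods report Ready, or fail after the timeout.
kubectl --context flannel-980383 -n kube-flannel \
  wait pod -l app=flannel --for=condition=Ready --timeout=10m
```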

TestNetworkPlugins/group/flannel/KubeletFlags (0.51s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-980383 "pgrep -a kubelet"
I1002 07:21:45.710377 1283508 config.go:182] Loaded profile config "flannel-980383": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.51s)

TestNetworkPlugins/group/flannel/NetCatPod (12.38s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-980383 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-85tdz" [6f9fb423-3d1a-4151-a4f8-0091688f0787] Pending
helpers_test.go:352: "netcat-cd4db9dbf-85tdz" [6f9fb423-3d1a-4151-a4f8-0091688f0787] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-85tdz" [6f9fb423-3d1a-4151-a4f8-0091688f0787] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 12.013250286s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (12.38s)

TestNetworkPlugins/group/bridge/Start (80.76s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-980383 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-980383 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker: (1m20.761463314s)
--- PASS: TestNetworkPlugins/group/bridge/Start (80.76s)

TestNetworkPlugins/group/flannel/DNS (0.47s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-980383 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.47s)

TestNetworkPlugins/group/flannel/Localhost (0.42s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-980383 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.42s)

TestNetworkPlugins/group/flannel/HairPin (0.37s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-980383 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.37s)

TestNetworkPlugins/group/kubenet/Start (83.41s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kubenet-980383 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker
E1002 07:22:37.621758 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/kindnet-980383/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:22:37.628109 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/kindnet-980383/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:22:37.639459 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/kindnet-980383/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:22:37.661316 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/kindnet-980383/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:22:37.704782 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/kindnet-980383/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:22:37.786179 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/kindnet-980383/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:22:37.948152 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/kindnet-980383/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:22:38.269735 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/kindnet-980383/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:22:38.911623 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/kindnet-980383/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:22:40.193948 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/kindnet-980383/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:22:42.756227 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/kindnet-980383/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:22:47.877569 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/kindnet-980383/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:22:58.118905 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/kindnet-980383/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:23:06.796625 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/calico-980383/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:23:06.803096 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/calico-980383/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:23:06.814992 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/calico-980383/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:23:06.836431 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/calico-980383/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:23:06.877802 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/calico-980383/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:23:06.959199 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/calico-980383/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:23:07.120785 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/calico-980383/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:23:07.442565 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/calico-980383/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:23:08.084343 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/calico-980383/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:23:09.366383 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/calico-980383/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:23:11.928319 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/calico-980383/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kubenet-980383 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker: (1m23.409840745s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (83.41s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.32s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-980383 "pgrep -a kubelet"
I1002 07:23:13.742471 1283508 config.go:182] Loaded profile config "bridge-980383": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.32s)

TestNetworkPlugins/group/bridge/NetCatPod (9.31s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-980383 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-dl8vv" [3824d688-06b8-4001-b1e6-e14f0bd8bbd3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1002 07:23:17.050489 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/calico-980383/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-dl8vv" [3824d688-06b8-4001-b1e6-e14f0bd8bbd3] Running
E1002 07:23:18.600252 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/kindnet-980383/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:23:22.095270 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/skaffold-509884/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.004197032s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.31s)

TestNetworkPlugins/group/bridge/DNS (0.21s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-980383 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.21s)

TestNetworkPlugins/group/bridge/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-980383 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.16s)

TestNetworkPlugins/group/bridge/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-980383 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.18s)

TestStartStop/group/old-k8s-version/serial/FirstStart (97.77s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-994555 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.0
E1002 07:23:47.774289 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/calico-980383/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-994555 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.0: (1m37.771941415s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (97.77s)

TestNetworkPlugins/group/kubenet/KubeletFlags (0.34s)

=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kubenet-980383 "pgrep -a kubelet"
I1002 07:23:50.109299 1283508 config.go:182] Loaded profile config "kubenet-980383": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.34s)

TestNetworkPlugins/group/kubenet/NetCatPod (10.46s)

=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-980383 replace --force -f testdata/netcat-deployment.yaml
I1002 07:23:50.509644 1283508 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-c7hp4" [800635c6-1875-4326-acc7-c2597745af43] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-c7hp4" [800635c6-1875-4326-acc7-c2597745af43] Running
E1002 07:23:57.194270 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/addons-096496/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:23:59.561510 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/kindnet-980383/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 10.005003787s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (10.46s)

TestNetworkPlugins/group/kubenet/DNS (0.22s)

=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-980383 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.22s)

TestNetworkPlugins/group/kubenet/Localhost (0.21s)

=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-980383 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.21s)

TestNetworkPlugins/group/kubenet/HairPin (0.30s)

=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-980383 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.30s)

TestStartStop/group/no-preload/serial/FirstStart (86.84s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-499633 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.1
E1002 07:24:28.738064 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/calico-980383/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:24:31.204358 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/custom-flannel-980383/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:24:41.445801 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/custom-flannel-980383/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:24:45.158829 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/skaffold-509884/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:25:01.927143 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/custom-flannel-980383/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:25:10.921465 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/false-980383/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:25:10.928166 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/false-980383/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:25:10.939486 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/false-980383/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:25:10.960822 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/false-980383/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:25:11.002216 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/false-980383/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:25:11.083763 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/false-980383/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:25:11.245118 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/false-980383/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:25:11.566948 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/false-980383/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:25:12.209085 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/false-980383/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:25:13.491149 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/false-980383/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:25:16.053056 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/false-980383/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:25:21.174938 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/false-980383/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:25:21.483092 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/kindnet-980383/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-499633 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.1: (1m26.840884354s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (86.84s)

TestStartStop/group/old-k8s-version/serial/DeployApp (10.42s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-994555 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [e99c774f-15e2-4ddb-a7bc-bc2b0cda1aec] Pending
helpers_test.go:352: "busybox" [e99c774f-15e2-4ddb-a7bc-bc2b0cda1aec] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [e99c774f-15e2-4ddb-a7bc-bc2b0cda1aec] Running
E1002 07:25:31.417296 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/false-980383/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.004025025s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-994555 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.42s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.20s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-994555 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-994555 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.047041721s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-994555 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.20s)

TestStartStop/group/old-k8s-version/serial/Stop (10.98s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-994555 --alsologtostderr -v=3
E1002 07:25:42.888512 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/custom-flannel-980383/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-994555 --alsologtostderr -v=3: (10.984071366s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (10.98s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.29s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-994555 -n old-k8s-version-994555
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-994555 -n old-k8s-version-994555: exit status 7 (91.519235ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-994555 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.29s)

TestStartStop/group/old-k8s-version/serial/SecondStart (57.52s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-994555 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.0
E1002 07:25:50.662133 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/calico-980383/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:25:51.899250 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/false-980383/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:25:53.720192 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/auto-980383/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-994555 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.0: (57.001720709s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-994555 -n old-k8s-version-994555
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (57.52s)

TestStartStop/group/no-preload/serial/DeployApp (9.46s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-499633 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [0f761134-6853-4ad4-bf16-54cb67cc3c70] Pending
helpers_test.go:352: "busybox" [0f761134-6853-4ad4-bf16-54cb67cc3c70] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1002 07:25:56.032348 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/functional-970698/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "busybox" [0f761134-6853-4ad4-bf16-54cb67cc3c70] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.003590383s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-499633 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.46s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.38s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-499633 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-499633 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.236964003s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-499633 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.38s)

TestStartStop/group/no-preload/serial/Stop (11.29s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-499633 --alsologtostderr -v=3
E1002 07:26:12.962454 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/functional-970698/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-499633 --alsologtostderr -v=3: (11.29446615s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (11.29s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.29s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-499633 -n no-preload-499633
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-499633 -n no-preload-499633: exit status 7 (92.546533ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-499633 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.29s)

TestStartStop/group/no-preload/serial/SecondStart (54.20s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-499633 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.1
E1002 07:26:17.272211 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/enable-default-cni-980383/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:26:17.278561 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/enable-default-cni-980383/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:26:17.289938 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/enable-default-cni-980383/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:26:17.311411 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/enable-default-cni-980383/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:26:17.352760 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/enable-default-cni-980383/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:26:17.434100 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/enable-default-cni-980383/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:26:17.595608 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/enable-default-cni-980383/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:26:17.917246 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/enable-default-cni-980383/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:26:18.559150 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/enable-default-cni-980383/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:26:19.841145 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/enable-default-cni-980383/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:26:21.421384 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/auto-980383/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:26:22.402725 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/enable-default-cni-980383/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:26:27.524771 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/enable-default-cni-980383/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:26:32.861065 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/false-980383/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:26:37.766745 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/enable-default-cni-980383/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:26:39.195266 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/flannel-980383/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:26:39.201626 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/flannel-980383/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:26:39.212964 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/flannel-980383/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:26:39.234424 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/flannel-980383/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:26:39.275821 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/flannel-980383/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:26:39.357529 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/flannel-980383/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:26:39.519605 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/flannel-980383/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:26:39.841274 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/flannel-980383/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:26:40.483449 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/flannel-980383/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:26:41.765661 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/flannel-980383/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-499633 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.1: (53.779097965s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-499633 -n no-preload-499633
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (54.20s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.00s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-9lwfl" [bfc38533-abb2-4ec2-8e81-84a7bd8e8c08] Running
E1002 07:26:44.326957 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/flannel-980383/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003308617s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-9lwfl" [bfc38533-abb2-4ec2-8e81-84a7bd8e8c08] Running
E1002 07:26:49.449089 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/flannel-980383/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003199769s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-994555 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.29s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-994555 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.29s)

TestStartStop/group/old-k8s-version/serial/Pause (3.23s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-994555 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-994555 -n old-k8s-version-994555
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-994555 -n old-k8s-version-994555: exit status 2 (378.498792ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-994555 -n old-k8s-version-994555
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-994555 -n old-k8s-version-994555: exit status 2 (386.697713ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-994555 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-994555 -n old-k8s-version-994555
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-994555 -n old-k8s-version-994555
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.23s)

TestStartStop/group/embed-certs/serial/FirstStart (81.57s)
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-485497 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.1
E1002 07:27:04.810564 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/custom-flannel-980383/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-485497 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.1: (1m21.571842286s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (81.57s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-vz92v" [d6728332-97ca-4ac0-b286-bcbb0dd3ab33] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004588284s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.12s)
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-vz92v" [d6728332-97ca-4ac0-b286-bcbb0dd3ab33] Running
E1002 07:27:20.172471 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/flannel-980383/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004095322s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-499633 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.12s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.26s)
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-499633 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.26s)

TestStartStop/group/no-preload/serial/Pause (3.81s)
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-499633 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-499633 -n no-preload-499633
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-499633 -n no-preload-499633: exit status 2 (412.407245ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-499633 -n no-preload-499633
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-499633 -n no-preload-499633: exit status 2 (404.242933ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-499633 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-499633 -n no-preload-499633
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-499633 -n no-preload-499633
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.81s)

TestStartStop/group/newest-cni/serial/FirstStart (45.67s)
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-684649 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.1
E1002 07:27:37.621746 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/kindnet-980383/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:27:39.210083 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/enable-default-cni-980383/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:27:54.782628 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/false-980383/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:28:01.134010 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/flannel-980383/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:28:05.324869 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/kindnet-980383/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:28:06.796632 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/calico-980383/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:28:14.032710 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/bridge-980383/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:28:14.039005 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/bridge-980383/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:28:14.050322 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/bridge-980383/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:28:14.072606 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/bridge-980383/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:28:14.113952 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/bridge-980383/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:28:14.195366 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/bridge-980383/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:28:14.356847 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/bridge-980383/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-684649 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.1: (45.668203556s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (45.67s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.1s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-684649 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E1002 07:28:14.678547 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/bridge-980383/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:28:15.320888 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/bridge-980383/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-684649 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.102878137s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.10s)

TestStartStop/group/newest-cni/serial/Stop (5.92s)
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-684649 --alsologtostderr -v=3
E1002 07:28:16.603012 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/bridge-980383/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:28:19.165432 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/bridge-980383/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-684649 --alsologtostderr -v=3: (5.921675157s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (5.92s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.2s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-684649 -n newest-cni-684649
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-684649 -n newest-cni-684649: exit status 7 (77.118777ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-684649 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/newest-cni/serial/SecondStart (20.7s)
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-684649 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-684649 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.1: (20.222828587s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-684649 -n newest-cni-684649
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (20.70s)

TestStartStop/group/embed-certs/serial/DeployApp (10.52s)
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-485497 create -f testdata/busybox.yaml
E1002 07:28:22.094426 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/skaffold-509884/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [61c771cd-3afe-4936-811e-f1ee9b4b1bed] Pending
helpers_test.go:352: "busybox" [61c771cd-3afe-4936-811e-f1ee9b4b1bed] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1002 07:28:24.287515 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/bridge-980383/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "busybox" [61c771cd-3afe-4936-811e-f1ee9b4b1bed] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.00393551s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-485497 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.52s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.79s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-485497 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-485497 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.658839523s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-485497 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.79s)

TestStartStop/group/embed-certs/serial/Stop (11.64s)
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-485497 --alsologtostderr -v=3
E1002 07:28:34.503764 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/calico-980383/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:28:34.529407 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/bridge-980383/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-485497 --alsologtostderr -v=3: (11.642165276s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (11.64s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-684649 image list --format=json
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/newest-cni/serial/Pause (3.86s)
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-684649 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-684649 -n newest-cni-684649
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-684649 -n newest-cni-684649: exit status 2 (313.95264ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-684649 -n newest-cni-684649
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-684649 -n newest-cni-684649: exit status 2 (339.668853ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-684649 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-684649 -n newest-cni-684649
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-684649 -n newest-cni-684649
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.86s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.39s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-485497 -n embed-certs-485497
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-485497 -n embed-certs-485497: exit status 7 (125.295168ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-485497 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.39s)

TestStartStop/group/embed-certs/serial/SecondStart (58.76s)
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-485497 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-485497 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.1: (58.268024943s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-485497 -n embed-certs-485497
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (58.76s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (80.76s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-507749 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.1
E1002 07:28:50.458166 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/kubenet-980383/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:28:50.464921 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/kubenet-980383/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:28:50.476319 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/kubenet-980383/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:28:50.497710 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/kubenet-980383/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:28:50.539101 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/kubenet-980383/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:28:50.620476 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/kubenet-980383/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:28:50.782413 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/kubenet-980383/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:28:51.103852 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/kubenet-980383/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:28:51.745408 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/kubenet-980383/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:28:53.027671 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/kubenet-980383/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:28:55.012139 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/bridge-980383/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:28:55.592323 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/kubenet-980383/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:28:57.193399 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/addons-096496/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:29:00.714695 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/kubenet-980383/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:29:01.131907 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/enable-default-cni-980383/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:29:10.956812 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/kubenet-980383/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:29:20.923329 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/custom-flannel-980383/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:29:23.055895 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/flannel-980383/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:29:31.438962 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/kubenet-980383/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:29:35.973720 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/bridge-980383/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-507749 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.1: (1m20.761076629s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (80.76s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-szpxj" [caac4171-8d92-4497-af1c-d4518f330d76] Running
E1002 07:29:48.652237 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/custom-flannel-980383/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003890293s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.1s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-szpxj" [caac4171-8d92-4497-af1c-d4518f330d76] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004190941s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-485497 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.10s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-485497 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (3.71s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-485497 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-485497 -n embed-certs-485497
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-485497 -n embed-certs-485497: exit status 2 (348.890548ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-485497 -n embed-certs-485497
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-485497 -n embed-certs-485497: exit status 2 (341.572965ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-485497 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-485497 -n embed-certs-485497
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-485497 -n embed-certs-485497
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-485497 -n embed-certs-485497: (1.023669961s)
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.71s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.34s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-507749 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [3b1f110d-b15a-47ae-b69d-76d16c8290d3] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1002 07:30:10.921586 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/false-980383/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:30:12.401037 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/kubenet-980383/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "busybox" [3b1f110d-b15a-47ae-b69d-76d16c8290d3] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.004542365s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-507749 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.34s)
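The final step of DeployApp runs `kubectl exec busybox -- /bin/sh -c "ulimit -n"` to read the pod's soft limit on open file descriptors. The same probe works on any host, no minikube required (a minimal local sketch of what the test executes inside the pod):

```shell
# Read the soft max-open-files limit, as the test does inside the
# busybox pod via `kubectl exec`. Prints a number (or "unlimited").
sh -c 'ulimit -n'
```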

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.04s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-507749 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-507749 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.04s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (10.83s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-507749 --alsologtostderr -v=3
E1002 07:30:22.760211 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/old-k8s-version-994555/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:30:22.766558 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/old-k8s-version-994555/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:30:22.777929 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/old-k8s-version-994555/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:30:22.800565 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/old-k8s-version-994555/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:30:22.842160 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/old-k8s-version-994555/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:30:22.924284 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/old-k8s-version-994555/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:30:23.086039 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/old-k8s-version-994555/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:30:23.408595 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/old-k8s-version-994555/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:30:24.050153 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/old-k8s-version-994555/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:30:25.331647 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/old-k8s-version-994555/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:30:27.893341 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/old-k8s-version-994555/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-507749 --alsologtostderr -v=3: (10.832717826s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (10.83s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-507749 -n default-k8s-diff-port-507749
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-507749 -n default-k8s-diff-port-507749: exit status 7 (73.140648ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-507749 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (52.97s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-507749 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.1
E1002 07:30:33.015543 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/old-k8s-version-994555/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:30:38.624403 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/false-980383/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:30:43.257391 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/old-k8s-version-994555/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:30:53.720751 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/auto-980383/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:30:54.125367 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/no-preload-499633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:30:54.131736 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/no-preload-499633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:30:54.143219 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/no-preload-499633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:30:54.165065 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/no-preload-499633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:30:54.206412 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/no-preload-499633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:30:54.287806 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/no-preload-499633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:30:54.449289 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/no-preload-499633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:30:54.770956 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/no-preload-499633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:30:55.412393 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/no-preload-499633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:30:56.694187 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/no-preload-499633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:30:57.895791 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/bridge-980383/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:30:59.255516 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/no-preload-499633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:31:03.738876 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/old-k8s-version-994555/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:31:04.376978 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/no-preload-499633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:31:12.962381 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/functional-970698/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:31:14.618375 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/no-preload-499633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:31:17.272021 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/enable-default-cni-980383/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-507749 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.1: (52.581710655s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-507749 -n default-k8s-diff-port-507749
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (52.97s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-7p8tg" [d23e4d1e-6b30-429b-b219-6c816c5067d6] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.008283721s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.1s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-7p8tg" [d23e4d1e-6b30-429b-b219-6c816c5067d6] Running
E1002 07:31:34.323352 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/kubenet-980383/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:31:35.099974 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/no-preload-499633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004227816s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-507749 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.10s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-507749 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (2.92s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-507749 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-507749 -n default-k8s-diff-port-507749
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-507749 -n default-k8s-diff-port-507749: exit status 2 (314.947937ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-507749 -n default-k8s-diff-port-507749
E1002 07:31:39.195512 1283508 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/flannel-980383/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-507749 -n default-k8s-diff-port-507749: exit status 2 (321.766231ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-507749 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-507749 -n default-k8s-diff-port-507749
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-507749 -n default-k8s-diff-port-507749
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.92s)

                                                
                                    

Test skip (26/347)

TestDownloadOnly/v1.28.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.1/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.1/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.1/kubectl (0.00s)

                                                
                                    
TestDownloadOnlyKic (0.46s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-000546 --alsologtostderr --driver=docker  --container-runtime=docker
aaa_download_only_test.go:248: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-000546" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-000546
--- SKIP: TestDownloadOnlyKic (0.46s)

                                                
                                    
TestAddons/serial/GCPAuth/RealCredentials (0.01s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:759: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.01s)

                                                
                                    
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestAddons/parallel/AmdGpuDevicePlugin (0s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1033: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:114: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:178: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1792: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestFunctionalNewestKubernetes (0s)

=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestNetworkPlugins/group/cilium (4.15s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:636: 
----------------------- debugLogs start: cilium-980383 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-980383

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-980383

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-980383

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-980383

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-980383

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-980383

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-980383

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-980383

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-980383

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-980383

>>> host: /etc/nsswitch.conf:
* Profile "cilium-980383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-980383"

>>> host: /etc/hosts:
* Profile "cilium-980383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-980383"

>>> host: /etc/resolv.conf:
* Profile "cilium-980383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-980383"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-980383

>>> host: crictl pods:
* Profile "cilium-980383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-980383"

>>> host: crictl containers:
* Profile "cilium-980383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-980383"

>>> k8s: describe netcat deployment:
error: context "cilium-980383" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-980383" does not exist

>>> k8s: netcat logs:
error: context "cilium-980383" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-980383" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-980383" does not exist

>>> k8s: coredns logs:
error: context "cilium-980383" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-980383" does not exist

>>> k8s: api server logs:
error: context "cilium-980383" does not exist

>>> host: /etc/cni:
* Profile "cilium-980383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-980383"

>>> host: ip a s:
* Profile "cilium-980383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-980383"

>>> host: ip r s:
* Profile "cilium-980383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-980383"

>>> host: iptables-save:
* Profile "cilium-980383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-980383"

>>> host: iptables table nat:
* Profile "cilium-980383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-980383"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-980383

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-980383

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-980383" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-980383" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-980383

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-980383

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-980383" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-980383" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-980383" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-980383" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-980383" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-980383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-980383"

>>> host: kubelet daemon config:
* Profile "cilium-980383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-980383"

>>> k8s: kubelet logs:
* Profile "cilium-980383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-980383"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-980383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-980383"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-980383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-980383"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21643-1281649/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 02 Oct 2025 07:04:37 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: offline-docker-200011
contexts:
- context:
    cluster: offline-docker-200011
    extensions:
    - extension:
        last-update: Thu, 02 Oct 2025 07:04:37 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: offline-docker-200011
  name: offline-docker-200011
current-context: offline-docker-200011
kind: Config
preferences: {}
users:
- name: offline-docker-200011
  user:
    client-certificate: /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/offline-docker-200011/client.crt
    client-key: /home/jenkins/minikube-integration/21643-1281649/.minikube/profiles/offline-docker-200011/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-980383

>>> host: docker daemon status:
* Profile "cilium-980383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-980383"

>>> host: docker daemon config:
* Profile "cilium-980383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-980383"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-980383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-980383"

>>> host: docker system info:
* Profile "cilium-980383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-980383"

>>> host: cri-docker daemon status:
* Profile "cilium-980383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-980383"

>>> host: cri-docker daemon config:
* Profile "cilium-980383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-980383"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-980383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-980383"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-980383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-980383"

>>> host: cri-dockerd version:
* Profile "cilium-980383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-980383"

>>> host: containerd daemon status:
* Profile "cilium-980383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-980383"

>>> host: containerd daemon config:
* Profile "cilium-980383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-980383"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-980383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-980383"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-980383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-980383"

>>> host: containerd config dump:
* Profile "cilium-980383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-980383"

>>> host: crio daemon status:
* Profile "cilium-980383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-980383"

>>> host: crio daemon config:
* Profile "cilium-980383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-980383"

>>> host: /etc/crio:
* Profile "cilium-980383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-980383"

>>> host: crio config:
* Profile "cilium-980383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-980383"

----------------------- debugLogs end: cilium-980383 [took: 3.989707808s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-980383" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-980383
--- SKIP: TestNetworkPlugins/group/cilium (4.15s)

TestStartStop/group/disable-driver-mounts (0.2s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-839009" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-839009
--- SKIP: TestStartStop/group/disable-driver-mounts (0.20s)
